Result: SUCCESS
Tests: 3 failed / 24 succeeded
Started: 2020-06-09 08:45
Elapsed: 1h27m
Work namespace: ci-op-jkgil6lx
Refs: release-4.5:5a547146, 41:973f1d60
pod: 39c4cf58-aa2d-11ea-9b59-0a580a820459
repo: openshift/cluster-csi-snapshot-controller-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (35m5s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 2s of 29m50s (0%):

Jun 09 09:48:44.688 E ns/e2e-k8s-service-lb-available-677 svc/service-test Service stopped responding to GET requests on reused connections
Jun 09 09:48:45.687 - 999ms E ns/e2e-k8s-service-lb-available-677 svc/service-test Service is not responding to GET requests on reused connections
Jun 09 09:48:46.804 I ns/e2e-k8s-service-lb-available-677 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1591696987.xml
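
For reference, the repro line above ("go run hack/e2e.go -v -test --test_args=...") is the template emitted alongside the junit result. A minimal sketch of actually running it, assuming a source checkout that provides hack/e2e.go and a KUBECONFIG pointing at a live test cluster (both assumptions, not stated in this report):

# sketch: run from a checkout containing hack/e2e.go, against a reachable cluster
export KUBECONFIG=$HOME/.kube/config   # assumed path; point it at the cluster under test
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'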



Cluster upgrade OpenShift APIs remain available (34m5s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 34m4s (0%):

Jun 09 09:33:18.605 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-jkgil6lx-6f01d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 34.194.81.37:6443: connect: connection refused
Jun 09 09:33:19.575 E openshift-apiserver OpenShift API is not responding to GET requests
Jun 09 09:33:19.599 I openshift-apiserver OpenShift API started responding to GET requests
Jun 09 09:53:49.622 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-jkgil6lx-6f01d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 50.17.6.13:6443: connect: connection refused
Jun 09 09:53:50.575 E openshift-apiserver OpenShift API is not responding to GET requests
Jun 09 09:53:50.605 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1591696987.xml



openshift-tests Monitor cluster while tests execute (39m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
102 error level events were detected during this test run:

Jun 09 09:24:42.647 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-149-222.ec2.internal node/ip-10-0-149-222.ec2.internal container/kube-scheduler container exited with code 255 (Error): /v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=24425&timeoutSeconds=555&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:42.262280       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=24443&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:42.278230       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=18398&timeout=5m5s&timeoutSeconds=305&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:42.280774       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=18398&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:42.281849       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=24569&timeout=7m15s&timeoutSeconds=435&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:42.286948       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=18379&timeout=6m58s&timeoutSeconds=418&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0609 09:24:42.503823       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0609 09:24:42.503854       1 server.go:244] leaderelection lost\n
Jun 09 09:24:44.677 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-149-222.ec2.internal node/ip-10-0-149-222.ec2.internal container/kube-controller-manager container exited with code 255 (Error): PartialObjectMetadata: Get https://localhost:6443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=21715&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:43.537872       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=18378&timeout=6m15s&timeoutSeconds=375&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:43.539252       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/operatorhubs?allowWatchBookmarks=true&resourceVersion=21197&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:24:43.540453       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consoleexternalloglinks?allowWatchBookmarks=true&resourceVersion=21405&timeout=6m2s&timeoutSeconds=362&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0609 09:24:43.844458       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0609 09:24:43.844579       1 controllermanager.go:291] leaderelection lost\nI0609 09:24:43.858490       1 expand_controller.go:331] Shutting down expand controller\nI0609 09:24:43.858563       1 clusterroleaggregation_controller.go:161] Shutting down ClusterRoleAggregator\nI0609 09:24:43.858613       1 attach_detach_controller.go:374] Shutting down attach detach controller\nI0609 09:24:43.858642       1 stateful_set.go:158] Shutting down statefulset controller\nI0609 09:24:43.858653       1 disruption.go:348] Shutting down disruption controller\nI0609 09:24:43.858652       1 horizontal.go:180] Shutting down HPA controller\n
Jun 09 09:25:07.751 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-149-222.ec2.internal node/ip-10-0-149-222.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Jun 09 09:28:11.514 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 584)
Jun 09 09:29:44.767 E ns/openshift-machine-api pod/machine-api-operator-fdddb55bb-mfqk7 node/ip-10-0-179-148.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Jun 09 09:29:51.158 E clusteroperator/machine-config changed Degraded to True: MachineConfigDaemonFailed: Failed to resync 0.0.1-2020-06-09-084827 because: etcdserver: leader changed
Jun 09 09:29:59.424 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-149-222.ec2.internal node/ip-10-0-149-222.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:29:57.512473       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:29:57.516709       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0609 09:29:57.522760       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0609 09:29:57.524099       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:30:19.549 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-149-222.ec2.internal node/ip-10-0-149-222.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:30:19.262834       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:30:19.264962       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0609 09:30:19.265533       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0609 09:30:19.266150       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:31:24.230 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-177.ec2.internal node/ip-10-0-253-177.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:31:22.943198       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:31:22.946549       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0609 09:31:22.946880       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0609 09:31:22.948206       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:31:46.346 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-177.ec2.internal node/ip-10-0-253-177.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:31:45.519282       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:31:45.521232       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0609 09:31:45.521318       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0609 09:31:45.521989       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:31:57.290 E ns/openshift-machine-api pod/machine-api-controllers-584d7849c-7l5fn node/ip-10-0-179-148.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Jun 09 09:32:01.245 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7c74f85889-wqt9f node/ip-10-0-149-222.ec2.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error):      1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:21:25.177455       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:21:25.178557       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.167739       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.168457       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.168980       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.169419       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.170128       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:24:31.170656       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:32:00.489184       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0609 09:32:00.489708       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0609 09:32:00.489857       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nI0609 09:32:00.489940       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0609 09:32:00.490032       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0609 09:32:00.490067       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0609 09:32:00.490085       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0609 09:32:00.490106       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0609 09:32:00.490185       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Jun 09 09:32:14.310 E ns/openshift-cluster-machine-approver pod/machine-approver-86575bd76c-dk58z node/ip-10-0-149-222.ec2.internal container/machine-approver-controller container exited with code 2 (Error): 388&timeoutSeconds=569&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0609 09:25:39.369594       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: unknown (get clusteroperators.config.openshift.io)\nE0609 09:25:39.369757       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\nE0609 09:31:12.406996       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=25008&timeoutSeconds=404&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0609 09:31:12.408447       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=25034&timeoutSeconds=364&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0609 09:31:13.407757       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=25008&timeoutSeconds=546&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0609 09:31:13.409208       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=25034&timeoutSeconds=405&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Jun 09 09:32:41.657 E ns/openshift-monitoring pod/cluster-monitoring-operator-6f6c6c466b-xqzzv node/ip-10-0-253-177.ec2.internal container/kube-rbac-proxy container exited with code 255 (Error): I0609 09:11:08.950799       1 main.go:186] Valid token audiences: \nI0609 09:11:08.951206       1 main.go:248] Reading certificate files\nF0609 09:11:08.951514       1 main.go:252] Failed to initialize certificate reloader: error loading certificates: error loading certificate: open /etc/tls/private/tls.crt: no such file or directory\n
Jun 09 09:33:04.151 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-179-148.ec2.internal node/ip-10-0-179-148.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:33:02.689513       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:33:02.702094       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0609 09:33:02.702782       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0609 09:33:02.707725       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:33:18.171 E kube-apiserver Kube API started failing: Get https://api.ci-op-jkgil6lx-6f01d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 50.17.6.13:6443: connect: connection refused
Jun 09 09:33:19.141 E kube-apiserver Kube API is not responding to GET requests
Jun 09 09:33:22.237 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-179-148.ec2.internal node/ip-10-0-179-148.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): I0609 09:33:21.535534       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0609 09:33:21.537505       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0609 09:33:21.537542       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0609 09:33:21.538007       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 09 09:33:31.897 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-182-150.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/06/09 09:17:14 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/06/09 09:23:41 config map updated\n2020/06/09 09:23:41 successfully triggered reload\n
Jun 09 09:33:31.897 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-182-150.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/06/09 09:17:14 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/09 09:17:14 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/09 09:17:14 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/09 09:17:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/09 09:17:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/09 09:17:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/09 09:17:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/09 09:17:14 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/09 09:17:14 http.go:107: HTTPS: listening on [::]:9091\nI0609 09:17:14.585683       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/09 09:17:32 oauthproxy.go:774: basicauth: 10.131.0.9:47306 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:22:02 oauthproxy.go:774: basicauth: 10.131.0.9:52632 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:26:32 oauthproxy.go:774: basicauth: 10.131.0.9:58426 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:31:03 oauthproxy.go:774: basicauth: 10.131.0.9:35148 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 09 09:33:31.897 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-182-150.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-06-09T09:17:11.976446275Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-06-09T09:17:11.978518761Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-06-09T09:17:16.978347464Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-06-09T09:17:22.087558362Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-06-09T09:17:22.08763729Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-06-09T09:17:22.191065085Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-06-09T09:20:22.236304534Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-06-09T09:26:22.211849565Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jun 09 09:33:48.587 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-d949bbf69-6cqf8 node/ip-10-0-248-252.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Jun 09 09:33:49.692 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-182-150.ec2.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-09T09:33:37.528Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-09T09:33:37.538Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-09T09:33:37.539Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-09T09:33:37.540Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-09T09:33:37.540Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-09T09:33:37.540Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-09T09:33:37.543Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-09T09:33:37.543Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-09
Jun 09 09:33:50.599 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-248-252.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/06/09 09:16:55 Watching directory: "/etc/alertmanager/config"\n
Jun 09 09:33:50.599 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-248-252.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/06/09 09:16:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/09 09:16:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/09 09:16:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/09 09:16:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/09 09:16:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/09 09:16:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/09 09:16:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/09 09:16:55 http.go:107: HTTPS: listening on [::]:9095\nI0609 09:16:55.962109       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jun 09 09:33:51.524 E ns/openshift-monitoring pod/node-exporter-9qkqd node/ip-10-0-149-222.ec2.internal container/node-exporter container exited with code 143 (Error): -09T09:11:22Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 09 09:33:53.605 E ns/openshift-monitoring pod/prometheus-adapter-f4bdcb8b9-g87pg node/ip-10-0-248-252.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0609 09:16:51.336612       1 adapter.go:94] successfully using in-cluster auth\nI0609 09:16:53.901353       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0609 09:16:53.901418       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0609 09:16:53.903567       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0609 09:16:53.904156       1 secure_serving.go:178] Serving securely on [::]:6443\nI0609 09:16:53.904469       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Jun 09 09:33:53.649 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-177.ec2.internal node/ip-10-0-253-177.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): pis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=27221&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:33:52.999229       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30602&timeout=8m48s&timeoutSeconds=528&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:33:53.000761       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30986&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:33:53.012273       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=25968&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:33:53.013617       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=31194&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0609 09:33:53.014775       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=27647&timeout=8m43s&timeoutSeconds=523&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0609 09:33:53.072299       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0609 09:33:53.072355       1 policy_controller.go:94] leaderelection lost\nI0609 09:33:53.082667       1 reconciliation_controller.go:154] Shutting down ClusterQuotaReconcilationController\n
Jun 09 09:33:54.661 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-253-177.ec2.internal node/ip-10-0-253-177.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Jun 09 09:33:57.644 E ns/openshift-monitoring pod/node-exporter-jxxfb node/ip-10-0-248-252.ec2.internal container/node-exporter container exited with code 143 (Error): -09T09:15:16Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-09T09:15:16Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 09 09:34:06.730 E ns/openshift-monitoring pod/thanos-querier-5d97d456b8-sp4xj node/ip-10-0-248-252.ec2.internal container/oauth-proxy container exited with code 2 (Error): :17:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/09 09:17:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/09 09:17:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/09 09:17:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/09 09:17:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/09 09:17:03 http.go:107: HTTPS: listening on [::]:9091\nI0609 09:17:03.048766       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/09 09:17:10 oauthproxy.go:774: basicauth: 10.129.0.8:52582 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:19:10 oauthproxy.go:774: basicauth: 10.129.0.8:55352 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:20:10 oauthproxy.go:774: basicauth: 10.129.0.8:56030 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:21:10 oauthproxy.go:774: basicauth: 10.129.0.8:56752 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:22:10 oauthproxy.go:774: basicauth: 10.129.0.8:59204 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:26:10 oauthproxy.go:774: basicauth: 10.129.0.8:46174 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:27:10 oauthproxy.go:774: basicauth: 10.129.0.8:46944 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:31:42 oauthproxy.go:774: basicauth: 10.130.0.46:45324 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:31:42 oauthproxy.go:774: basicauth: 10.130.0.46:45324 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 09 09:34:06.818 E ns/openshift-monitoring pod/node-exporter-8zzlb node/ip-10-0-253-177.ec2.internal container/node-exporter container exited with code 143 (Error): -09T09:11:21Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-06-09T09:11:21Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jun 09 09:34:10.786 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-248-252.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-09T09:34:05.958Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-09T09:34:05.963Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-09T09:34:05.963Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-09T09:34:05.965Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-09T09:34:05.965Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-09T09:34:05.965Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-09T09:34:05.966Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-09T09:34:05.966Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-09
Jun 09 09:34:26.919 E ns/openshift-marketplace pod/certified-operators-564c64c87c-pkh9r node/ip-10-0-248-252.ec2.internal container/certified-operators container exited with code 2 (Error): 
Jun 09 09:34:28.120 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-253-177.ec2.internal node/ip-10-0-253-177.ec2.internal container/kube-scheduler container exited with code 255 (Error): nodes.storage.k8s.io)\nE0609 09:34:25.991990       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0609 09:34:25.992067       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0609 09:34:26.108676       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0609 09:34:26.109316       1 webhook.go:111] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-scheduler" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0609 09:34:26.109379       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-scheduler" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nE0609 09:34:26.117054       1 writers.go:105] apiserver was unable to write a JSON response: no kind is registered for the type v1.Status in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"\nE0609 09:34:26.117133       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0609 09:34:26.213296       1 status.go:71] apiserver received an error that is not an metav1.Status: &runtime.notRegisteredErr{schemeName:"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30", gvk:schema.GroupVersionKind{Group:"", Version:"", Kind:""}, target:runtime.GroupVersioner(nil), t:(*reflect.rtype)(0x1aab860)}\nE0609 09:34:27.736202       1 cache.go:513] Pod 474130b2-666f-4154-a41a-d859993836fa updated on a different node than previously added to.\nF0609 09:34:27.736226       1 cache.go:514] Schedulercache is corrupted and can badly affect scheduling decisions\n
Jun 09 09:34:32.936 E ns/openshift-marketplace pod/community-operators-5f5d7d5d64-bcxp4 node/ip-10-0-248-252.ec2.internal container/community-operators container exited with code 2 (Error): 
Jun 09 09:36:59.581 E ns/openshift-sdn pod/sdn-controller-66gm2 node/ip-10-0-179-148.ec2.internal container/sdn-controller container exited with code 2 (Error): I0609 09:06:25.633661       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0609 09:10:51.933646       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-jkgil6lx-6f01d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jun 09 09:37:03.270 E ns/openshift-sdn pod/sdn-controller-kpfkb node/ip-10-0-149-222.ec2.internal container/sdn-controller container exited with code 2 (Error): I0609 09:06:09.184857       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0609 09:10:51.939209       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-jkgil6lx-6f01d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jun 09 09:37:06.859 E ns/openshift-sdn pod/sdn-controller-drp8k node/ip-10-0-253-177.ec2.internal container/sdn-controller container exited with code 2 (Error): s.go:116] Allocated netid 16299209 for namespace "e2e-frontend-ingress-available-5876"\nI0609 09:23:49.746342       1 vnids.go:116] Allocated netid 15601529 for namespace "e2e-check-for-critical-alerts-5803"\nI0609 09:23:49.769691       1 vnids.go:116] Allocated netid 11518336 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-334"\nI0609 09:23:49.804900       1 vnids.go:116] Allocated netid 16105327 for namespace "e2e-openshift-api-available-1055"\nI0609 09:23:49.828199       1 vnids.go:116] Allocated netid 2785950 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6533"\nI0609 09:23:49.849699       1 vnids.go:116] Allocated netid 1823320 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2009"\nI0609 09:23:49.868748       1 vnids.go:116] Allocated netid 2991113 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-1774"\nI0609 09:23:49.894316       1 vnids.go:116] Allocated netid 7196081 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-9148"\nI0609 09:23:49.922910       1 vnids.go:116] Allocated netid 3483309 for namespace "e2e-kubernetes-api-available-5308"\nI0609 09:33:17.712632       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:33:17.712631       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:33:17.713020       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:33:17.713167       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:36:32.147361       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:36:32.148426       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:36:32.149615       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0609 09:36:32.149906       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jun 09 09:37:08.620 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-179-148.ec2.internal node/ip-10-0-179-148.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Jun 09 09:37:09.513 E ns/openshift-sdn pod/sdn-fm4t6 node/ip-10-0-182-150.ec2.internal container/sdn container exited with code 255 (Error): undrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.149.222:6443 10.0.253.177:6443]\nI0609 09:35:22.116019    2306 roundrobin.go:217] Delete endpoint 10.0.179.148:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0609 09:35:22.161168    2306 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.149.222:6443 10.0.253.177:6443]\nI0609 09:35:22.161222    2306 roundrobin.go:217] Delete endpoint 10.0.179.148:6443 for service "default/kubernetes:https"\nI0609 09:35:22.276752    2306 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:35:22.277401    2306 proxier.go:349] userspace syncProxyRules took 29.570286ms\nI0609 09:35:22.425915    2306 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:35:22.426538    2306 proxier.go:349] userspace syncProxyRules took 29.822347ms\nI0609 09:36:56.505928    2306 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.3:8443 10.129.0.14:8443]\nI0609 09:36:56.505979    2306 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0609 09:36:56.506001    2306 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.3:6443 10.129.0.14:6443]\nI0609 09:36:56.506014    2306 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0609 09:36:56.664218    2306 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:36:56.664903    2306 proxier.go:349] userspace syncProxyRules took 30.13851ms\nI0609 09:37:08.702944    2306 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0609 09:37:08.702982    2306 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 09 09:37:27.381 E ns/openshift-multus pod/multus-pxljb node/ip-10-0-182-150.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jun 09 09:37:28.384 E ns/openshift-sdn pod/sdn-f7744 node/ip-10-0-149-222.ec2.internal container/sdn container exited with code 255 (Error): 1    2212 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.3:6443 10.129.0.14:6443]\nI0609 09:36:56.506560    2212 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0609 09:36:56.506639    2212 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.3:8443 10.129.0.14:8443]\nI0609 09:36:56.506689    2212 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0609 09:36:56.682858    2212 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:36:56.683398    2212 proxier.go:349] userspace syncProxyRules took 30.569357ms\nI0609 09:37:16.666331    2212 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.149.222:10257 10.0.253.177:10257]\nI0609 09:37:16.666380    2212 roundrobin.go:217] Delete endpoint 10.0.179.148:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0609 09:37:16.836147    2212 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:37:16.836693    2212 proxier.go:349] userspace syncProxyRules took 32.584247ms\nI0609 09:37:17.678877    2212 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.149.222:10257 10.0.179.148:10257 10.0.253.177:10257]\nI0609 09:37:17.930049    2212 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:37:17.931079    2212 proxier.go:349] userspace syncProxyRules took 60.227905ms\nI0609 09:37:27.586554    2212 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0609 09:37:27.586656    2212 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 09 09:37:51.167 E ns/openshift-sdn pod/sdn-d6xcf node/ip-10-0-253-177.ec2.internal container/sdn container exited with code 255 (Error): 1 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.3:6443 10.129.0.14:6443 10.130.0.73:6443]\nI0609 09:37:45.130735   93891 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.3:8443 10.129.0.14:8443 10.130.0.73:8443]\nI0609 09:37:45.166293   93891 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.14:6443 10.130.0.73:6443]\nI0609 09:37:45.166325   93891 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0609 09:37:45.166337   93891 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.14:8443 10.130.0.73:8443]\nI0609 09:37:45.166345   93891 roundrobin.go:217] Delete endpoint 10.128.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0609 09:37:45.306303   93891 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:37:45.307170   93891 proxier.go:349] userspace syncProxyRules took 36.632325ms\nI0609 09:37:45.459678   93891 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:37:45.460384   93891 proxier.go:349] userspace syncProxyRules took 31.86656ms\nI0609 09:37:47.597068   93891 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.149.222:6443 10.0.179.148:6443 10.0.253.177:6443]\nI0609 09:37:47.757600   93891 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:37:47.758309   93891 proxier.go:349] userspace syncProxyRules took 32.74958ms\nI0609 09:37:50.436562   93891 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0609 09:37:50.436605   93891 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 09 09:38:11.541 E ns/openshift-sdn pod/sdn-cncnl node/ip-10-0-178-61.ec2.internal container/sdn container exited with code 255 (Error): rator-metrics:https-metrics" at 172.30.226.69:8081/TCP\nI0609 09:38:05.034896   44304 service.go:379] Adding new service port "openshift-authentication-operator/metrics:https" at 172.30.219.140:443/TCP\nI0609 09:38:05.034914   44304 service.go:379] Adding new service port "openshift-monitoring/thanos-querier:web" at 172.30.82.57:9091/TCP\nI0609 09:38:05.034938   44304 service.go:379] Adding new service port "openshift-monitoring/thanos-querier:tenancy" at 172.30.82.57:9092/TCP\nI0609 09:38:05.035192   44304 proxier.go:813] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0609 09:38:05.124402   44304 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:05.125138   44304 proxier.go:349] userspace syncProxyRules took 92.310011ms\nI0609 09:38:05.132878   44304 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:05.133568   44304 proxier.go:349] userspace syncProxyRules took 100.580021ms\nI0609 09:38:05.184386   44304 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:http" (:30491/tcp)\nI0609 09:38:05.184618   44304 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:https" (:31169/tcp)\nI0609 09:38:05.185223   44304 proxier.go:1656] Opened local port "nodePort for e2e-k8s-service-lb-available-677/service-test:" (:31248/tcp)\nI0609 09:38:05.223299   44304 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30237\nI0609 09:38:05.231362   44304 proxy.go:311] openshift-sdn proxy services and endpoints initialized\nI0609 09:38:05.231400   44304 cmd.go:172] openshift-sdn network plugin registering startup\nI0609 09:38:05.231517   44304 cmd.go:176] openshift-sdn network plugin ready\nI0609 09:38:11.404780   44304 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0609 09:38:11.404824   44304 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 09 09:38:16.034 E ns/openshift-multus pod/multus-admission-controller-m5n54 node/ip-10-0-179-148.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Jun 09 09:38:18.008 E ns/openshift-multus pod/multus-tbt89 node/ip-10-0-149-222.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jun 09 09:38:37.156 E ns/openshift-sdn pod/sdn-8x66d node/ip-10-0-179-148.ec2.internal container/sdn container exited with code 255 (Error):  09:38:00.914333   91178 pod.go:504] CNI_ADD openshift-kube-apiserver/revision-pruner-9-ip-10-0-179-148.ec2.internal got IP 10.128.0.63, ofport 64\nI0609 09:38:07.151723   91178 pod.go:540] CNI_DEL openshift-kube-apiserver/revision-pruner-9-ip-10-0-179-148.ec2.internal\nI0609 09:38:15.581895   91178 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-m5n54\nI0609 09:38:17.571682   91178 pod.go:504] CNI_ADD openshift-multus/multus-admission-controller-8q4rq got IP 10.128.0.64, ofport 65\nI0609 09:38:21.054623   91178 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.64:6443 10.129.0.14:6443 10.130.0.73:6443]\nI0609 09:38:21.054844   91178 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.64:8443 10.129.0.14:8443 10.130.0.73:8443]\nI0609 09:38:21.072546   91178 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.64:6443 10.130.0.73:6443]\nI0609 09:38:21.072574   91178 roundrobin.go:217] Delete endpoint 10.129.0.14:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0609 09:38:21.072592   91178 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.64:8443 10.130.0.73:8443]\nI0609 09:38:21.072604   91178 roundrobin.go:217] Delete endpoint 10.129.0.14:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0609 09:38:21.215287   91178 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:21.215891   91178 proxier.go:349] userspace syncProxyRules took 32.966282ms\nI0609 09:38:21.393775   91178 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:21.394403   91178 proxier.go:349] userspace syncProxyRules took 31.377863ms\nF0609 09:38:36.362308   91178 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jun 09 09:38:52.151 E ns/openshift-multus pod/multus-admission-controller-ljj7s node/ip-10-0-149-222.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Jun 09 09:39:02.440 E ns/openshift-sdn pod/sdn-7fndb node/ip-10-0-248-252.ec2.internal container/sdn container exited with code 255 (Error): o:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.64:8443 10.129.0.14:8443 10.130.0.73:8443]\nI0609 09:38:21.072365   97746 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.64:6443 10.130.0.73:6443]\nI0609 09:38:21.072391   97746 roundrobin.go:217] Delete endpoint 10.129.0.14:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0609 09:38:21.072409   97746 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.64:8443 10.130.0.73:8443]\nI0609 09:38:21.072423   97746 roundrobin.go:217] Delete endpoint 10.129.0.14:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0609 09:38:21.191671   97746 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:21.191997   97746 proxier.go:349] userspace syncProxyRules took 27.914623ms\nI0609 09:38:21.326515   97746 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:38:21.326842   97746 proxier.go:349] userspace syncProxyRules took 30.79319ms\nI0609 09:39:00.186784   97746 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.64:6443 10.129.0.81:6443 10.130.0.73:6443]\nI0609 09:39:00.186821   97746 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.64:8443 10.129.0.81:8443 10.130.0.73:8443]\nI0609 09:39:00.321248   97746 proxier.go:370] userspace proxy: processing 0 service events\nI0609 09:39:00.321597   97746 proxier.go:349] userspace syncProxyRules took 27.551126ms\nI0609 09:39:02.292306   97746 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0609 09:39:02.292352   97746 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 09 09:40:33.643 E ns/openshift-multus pod/multus-9rf54 node/ip-10-0-248-252.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jun 09 09:41:22.709 E ns/openshift-multus pod/multus-rthcb node/ip-10-0-179-148.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jun 09 09:41:50.783 E ns/openshift-machine-config-operator pod/machine-config-operator-5879448d-fb977 node/ip-10-0-149-222.ec2.internal container/machine-config-operator container exited with code 2 (Error): ingup mode\nE0609 09:06:55.903244       1 reflector.go:178] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0609 09:06:55.947816       1 reflector.go:178] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nI0609 09:06:56.403957       1 operator.go:265] Starting MachineConfigOperator\nI0609 09:06:56.412590       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"69becd8a-5dbb-49ea-ac91-1587fb1b7ae6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [] to [{operator 0.0.1-2020-06-09-084827}]\nI0609 09:06:58.285555       1 sync.go:62] [init mode] synced RenderConfig in 1.859318202s\nI0609 09:06:58.405149       1 sync.go:62] [init mode] synced MachineConfigPools in 109.94553ms\nI0609 09:07:40.541259       1 sync.go:62] [init mode] synced MachineConfigDaemon in 42.132580927s\nI0609 09:07:46.613694       1 sync.go:62] [init mode] synced MachineConfigController in 6.067082442s\nI0609 09:07:51.690862       1 sync.go:62] [init mode] synced MachineConfigServer in 5.072470204s\nI0609 09:08:00.703012       1 sync.go:62] [init mode] synced RequiredPools in 9.008594055s\nI0609 09:08:00.740706       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"69becd8a-5dbb-49ea-ac91-1587fb1b7ae6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 0.0.1-2020-06-09-084827}]\nI0609 09:08:01.097592       1 sync.go:93] Initialization complete\n
Jun 09 09:43:49.199 E ns/openshift-machine-config-operator pod/machine-config-daemon-zqxtl node/ip-10-0-179-148.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jun 09 09:44:00.444 E ns/openshift-machine-config-operator pod/machine-config-daemon-jz2wj node/ip-10-0-182-150.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jun 09 09:44:10.134 E ns/openshift-machine-config-operator pod/machine-config-daemon-8blk6 node/ip-10-0-248-252.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jun 09 09:44:47.996 E ns/openshift-machine-config-operator pod/machine-config-daemon-v9t4n node/ip-10-0-253-177.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jun 09 09:47:34.135 E ns/openshift-machine-config-operator pod/machine-config-server-lgmvw node/ip-10-0-149-222.ec2.internal container/machine-config-server container exited with code 2 (Error): I0609 09:07:50.666930       1 start.go:38] Version: machine-config-daemon-4.5.0-202005291037-2-g293f78b6-dirty (293f78b64d86f9f0491f6baa991e3f0c8fe1b046)\nI0609 09:07:50.667826       1 api.go:56] Launching server on :22624\nI0609 09:07:50.667999       1 api.go:56] Launching server on :22623\nI0609 09:12:39.570409       1 api.go:102] Pool worker requested by 10.0.164.124:56279\n
Jun 09 09:47:46.227 E ns/openshift-kube-storage-version-migrator pod/migrator-fdcb76978-8fb8d node/ip-10-0-178-61.ec2.internal container/migrator container exited with code 2 (Error): 
Jun 09 09:47:46.247 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-747db74985-5stkp node/ip-10-0-178-61.ec2.internal container/operator container exited with code 255 (Error):  at 30.761423ms\nI0609 09:42:00.339096       1 operator.go:146] Starting syncing operator at 2020-06-09 09:42:00.339085046 +0000 UTC m=+571.734500807\nI0609 09:42:00.860945       1 operator.go:148] Finished syncing operator at 521.850415ms\nI0609 09:47:02.193347       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:02.193323513 +0000 UTC m=+873.588739236\nI0609 09:47:02.225062       1 operator.go:148] Finished syncing operator at 31.726618ms\nI0609 09:47:02.225123       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:02.22511585 +0000 UTC m=+873.620531703\nI0609 09:47:02.246506       1 operator.go:148] Finished syncing operator at 21.376768ms\nI0609 09:47:02.991346       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:02.991335388 +0000 UTC m=+874.386751142\nI0609 09:47:03.041020       1 operator.go:148] Finished syncing operator at 49.674444ms\nI0609 09:47:03.085422       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:03.085411871 +0000 UTC m=+874.480827542\nI0609 09:47:03.110049       1 operator.go:148] Finished syncing operator at 24.626844ms\nI0609 09:47:03.181866       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:03.181855101 +0000 UTC m=+874.577270925\nI0609 09:47:03.206064       1 operator.go:148] Finished syncing operator at 24.196247ms\nI0609 09:47:03.283496       1 operator.go:146] Starting syncing operator at 2020-06-09 09:47:03.283485223 +0000 UTC m=+874.678900893\nI0609 09:47:03.805958       1 operator.go:148] Finished syncing operator at 522.462446ms\nI0609 09:47:44.142908       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0609 09:47:44.145769       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0609 09:47:44.145792       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0609 09:47:44.145806       1 logging_controller.go:93] Shutting down LogLevelController\nF0609 09:47:44.145885       1 builder.go:243] stopped\n
Jun 09 09:47:51.271 E ns/openshift-machine-config-operator pod/machine-config-server-bllxs node/ip-10-0-179-148.ec2.internal container/machine-config-server container exited with code 2 (Error): I0609 09:07:47.502977       1 start.go:38] Version: machine-config-daemon-4.5.0-202005291037-2-g293f78b6-dirty (293f78b64d86f9f0491f6baa991e3f0c8fe1b046)\nI0609 09:07:47.504106       1 api.go:56] Launching server on :22624\nI0609 09:07:47.504163       1 api.go:56] Launching server on :22623\nI0609 09:12:28.262316       1 api.go:102] Pool worker requested by 10.0.215.79:37586\n
Jun 09 09:47:55.946 E ns/openshift-machine-api pod/machine-api-controllers-7994d6fc49-6pxsc node/ip-10-0-253-177.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Jun 09 09:48:07.123 E ns/openshift-machine-config-operator pod/machine-config-server-rld6c node/ip-10-0-253-177.ec2.internal container/machine-config-server container exited with code 2 (Error): I0609 09:07:47.592696       1 start.go:38] Version: machine-config-daemon-4.5.0-202005291037-2-g293f78b6-dirty (293f78b64d86f9f0491f6baa991e3f0c8fe1b046)\nI0609 09:07:47.593689       1 api.go:56] Launching server on :22624\nI0609 09:07:47.593804       1 api.go:56] Launching server on :22623\nI0609 09:12:23.056384       1 api.go:102] Pool worker requested by 10.0.164.124:50768\n
Jun 09 09:48:15.083 E ns/e2e-k8s-sig-apps-job-upgrade-8578 pod/foo-smrcp node/ip-10-0-178-61.ec2.internal container/c container exited with code 137 (Error): 
Jun 09 09:48:15.108 E ns/e2e-k8s-sig-apps-job-upgrade-8578 pod/foo-lvnw7 node/ip-10-0-178-61.ec2.internal container/c container exited with code 137 (Error): 
Jun 09 09:48:30.131 E ns/e2e-k8s-service-lb-available-677 pod/service-test-245wk node/ip-10-0-178-61.ec2.internal container/netexec container exited with code 2 (Error): 
Jun 09 09:48:45.743 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Config Secret failed: updating Secret object failed: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
Jun 09 09:48:49.272 E ns/openshift-marketplace pod/redhat-operators-69b4d498bc-smb9b node/ip-10-0-182-150.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Jun 09 09:49:01.322 E ns/openshift-marketplace pod/community-operators-6697d8d5f9-r8rw9 node/ip-10-0-182-150.ec2.internal container/community-operators container exited with code 2 (Error): 
Jun 09 09:49:06.889 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 09 09:49:54.760 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Jun 09 09:50:13.684 E ns/openshift-monitoring pod/openshift-state-metrics-58987c46-jn5tg node/ip-10-0-182-150.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Jun 09 09:50:14.806 E ns/openshift-monitoring pod/prometheus-adapter-64bfd59bbb-nl74x node/ip-10-0-182-150.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0609 09:33:37.542007       1 adapter.go:94] successfully using in-cluster auth\nI0609 09:33:47.683853       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0609 09:33:47.683910       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0609 09:33:47.684235       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0609 09:33:47.686357       1 secure_serving.go:178] Serving securely on [::]:6443\nI0609 09:33:47.687274       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Jun 09 09:50:14.851 E ns/openshift-monitoring pod/kube-state-metrics-74856bb6f6-sqnf9 node/ip-10-0-182-150.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Jun 09 09:50:15.057 E ns/openshift-monitoring pod/thanos-querier-55d6cd9468-gb7xf node/ip-10-0-182-150.ec2.internal container/oauth-proxy container exited with code 2 (Error): hproxy.go:774: basicauth: 10.130.0.46:33346 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:36:43 oauthproxy.go:774: basicauth: 10.130.0.46:34984 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:37:52 oauthproxy.go:774: basicauth: 10.130.0.46:36152 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:37:52 oauthproxy.go:774: basicauth: 10.130.0.46:36152 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:38:42 oauthproxy.go:774: basicauth: 10.130.0.46:37046 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:38:42 oauthproxy.go:774: basicauth: 10.130.0.46:37046 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:40:42 oauthproxy.go:774: basicauth: 10.130.0.46:38970 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:41:42 oauthproxy.go:774: basicauth: 10.130.0.46:39828 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:42:42 oauthproxy.go:774: basicauth: 10.130.0.46:40824 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:43:42 oauthproxy.go:774: basicauth: 10.130.0.46:41666 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:44:42 oauthproxy.go:774: basicauth: 10.130.0.46:42632 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:46:42 oauthproxy.go:774: basicauth: 10.130.0.46:44604 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:47:52 oauthproxy.go:774: basicauth: 10.128.0.68:50558 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/09 09:49:52 oauthproxy.go:774: basicauth: 10.128.0.68:57996 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 09 09:50:15.150 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-182-150.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/06/09 09:33:43 Watching directory: "/etc/alertmanager/config"\n
Jun 09 09:50:15.150 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-182-150.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/06/09 09:33:47 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/09 09:33:47 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/09 09:33:47 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/09 09:33:47 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/09 09:33:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/09 09:33:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/09 09:33:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/09 09:33:47 http.go:107: HTTPS: listening on [::]:9095\nI0609 09:33:47.327161       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/09 09:37:10 reverseproxy.go:437: http: proxy error: context canceled\n
Jun 09 09:50:36.899 E ns/openshift-console pod/console-56b88c65f7-kz9p9 node/ip-10-0-149-222.ec2.internal container/console container exited with code 2 (Error): 2020-06-09T09:34:16Z cmd/main: cookies are secure!\n2020-06-09T09:34:16Z cmd/main: Binding to [::]:8443...\n2020-06-09T09:34:16Z cmd/main: using TLS\n
Jun 09 09:50:38.218 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-178-61.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-09T09:50:34.366Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-09T09:50:34.369Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-09T09:50:34.370Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-09T09:50:34.370Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-09T09:50:34.370Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-09T09:50:34.370Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-09T09:50:34.370Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-09T09:50:34.371Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-09T09:50:34.371Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-09T09:50:34.371Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-09T09:50:34.371Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-09T09:50:34.371Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-09T09:50:34.371Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-09T09:50:34.371Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-09T09:50:34.373Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-09T09:50:34.373Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-09
Jun 09 09:50:48.336 E kube-apiserver Kube API started failing: etcdserver: leader changed
Jun 09 09:50:49.141 E kube-apiserver Kube API is not responding to GET requests
Jun 09 09:50:49.141 E openshift-apiserver OpenShift API is not responding to GET requests
Jun 09 09:50:57.821 E ns/e2e-k8s-service-lb-available-677 pod/service-test-zx2wk node/ip-10-0-182-150.ec2.internal container/netexec container exited with code 2 (Error): 
Jun 09 09:51:18.338 E ns/openshift-marketplace pod/redhat-marketplace-7c7c66bd7-2627l node/ip-10-0-248-252.ec2.internal container/redhat-marketplace container exited with code 2 (Error): 
Jun 09 09:51:18.661 E clusteroperator/monitoring changed Degraded to True: UpdatingprometheusAdapterFailed: Failed to rollout the stack. Error: running task Updating prometheus-adapter failed: failed to load prometheus-adapter-tls secret: etcdserver: leader changed
Jun 09 09:51:20.347 E ns/openshift-marketplace pod/redhat-operators-694bcb95ff-z65dt node/ip-10-0-248-252.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Jun 09 09:51:20.376 E ns/openshift-marketplace pod/community-operators-7c57cf8bc6-xfr6s node/ip-10-0-248-252.ec2.internal container/community-operators container exited with code 2 (Error): 
Jun 09 09:51:22.355 E ns/openshift-marketplace pod/certified-operators-5565c7cc95-cldms node/ip-10-0-248-252.ec2.internal container/certified-operators container exited with code 2 (Error): 
Jun 09 09:51:59.949 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 09 09:52:04.157 E ns/openshift-console pod/console-56b88c65f7-fc5t6 node/ip-10-0-253-177.ec2.internal container/console container exited with code 2 (Error): 2020-06-09T09:50:17Z cmd/main: cookies are secure!\n2020-06-09T09:50:18Z cmd/main: Binding to [::]:8443...\n2020-06-09T09:50:18Z cmd/main: using TLS\n
Jun 09 09:52:31.469 E ns/openshift-console pod/console-56b88c65f7-bjcdn node/ip-10-0-179-148.ec2.internal container/console container exited with code 2 (Error): 2020-06-09T09:47:51Z cmd/main: cookies are secure!\n2020-06-09T09:47:51Z cmd/main: Binding to [::]:8443...\n2020-06-09T09:47:51Z cmd/main: using TLS\n2020-06-09T09:52:03Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jun 09 09:52:47.350 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-182-150.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-09T09:52:45.385Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-09T09:52:45.387Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-09T09:52:45.388Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-09T09:52:45.389Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-09T09:52:45.389Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-09T09:52:45.389Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-09T09:52:45.394Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-09T09:52:45.394Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-09
Jun 09 09:53:04.088 E ns/e2e-k8s-sig-apps-job-upgrade-8578 pod/foo-xlbm9 node/ip-10-0-248-252.ec2.internal container/c container exited with code 137 (Error): 
Jun 09 09:53:05.094 E ns/e2e-k8s-sig-apps-job-upgrade-8578 pod/foo-b8p8f node/ip-10-0-248-252.ec2.internal container/c container exited with code 137 (Error): 
Jun 09 09:53:17.621 E ns/openshift-machine-config-operator pod/machine-config-operator-8478656b54-d56bt node/ip-10-0-179-148.ec2.internal container/machine-config-operator container exited with code 2 (Error): :"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"88a1fd11-7706-4407-81c9-1ceeb398d75c", ResourceVersion:"37904", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727290415, loc:(*time.Location)(0x25203c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-8478656b54-d56bt_4a6b67f7-1cb3-44a7-b01b-e6de01e83ffd\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-06-09T09:43:47Z\",\"renewTime\":\"2020-06-09T09:43:47Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00035e3a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035e3c0)}}}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-8478656b54-d56bt_4a6b67f7-1cb3-44a7-b01b-e6de01e83ffd became leader'\nI0609 09:43:47.110733       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0609 09:43:47.746913       1 operator.go:265] Starting MachineConfigOperator\nI0609 09:43:47.755610       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"69becd8a-5dbb-49ea-ac91-1587fb1b7ae6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-06-09-084827}] to [{operator 0.0.1-2020-06-09-085416}]\n
Jun 09 09:53:19.387 E ns/openshift-machine-api pod/machine-api-controllers-7994d6fc49-hhbxb node/ip-10-0-179-148.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Jun 09 09:53:20.126 E ns/e2e-k8s-service-lb-available-677 pod/service-test-797kl node/ip-10-0-248-252.ec2.internal container/netexec container exited with code 2 (Error): 
Jun 09 09:53:37.236 E ns/openshift-console pod/console-57fbf87c86-xbjz9 node/ip-10-0-179-148.ec2.internal container/console container exited with code 2 (Error): 2020-06-09T09:51:41Z cmd/main: cookies are secure!\n2020-06-09T09:51:46Z auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-06-09T09:51:56Z cmd/main: Binding to [::]:8443...\n2020-06-09T09:51:56Z cmd/main: using TLS\n
Jun 09 09:53:51.213 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-794b6897fb-xmz8b node/ip-10-0-149-222.ec2.internal container/cloud-credential-operator container exited with code 1 (Error): Copying system trust bundle\ntime="2020-06-09T09:53:50Z" level=debug msg="debug logging enabled"\ntime="2020-06-09T09:53:50Z" level=info msg="setting up client for manager"\ntime="2020-06-09T09:53:50Z" level=info msg="setting up manager"\ntime="2020-06-09T09:53:50Z" level=fatal msg="unable to set up overall controller manager" error="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jun 09 09:53:53.195 E ns/openshift-monitoring pod/prometheus-operator-5cb5db8677-7wjcs node/ip-10-0-149-222.ec2.internal container/prometheus-operator container exited with code 1 (Error): ts=2020-06-09T09:53:45.761088214Z caller=main.go:221 msg="Starting Prometheus Operator version '0.38.1'."\nts=2020-06-09T09:53:45.791172252Z caller=main.go:105 msg="Starting insecure server on [::]:8080"\nlevel=info ts=2020-06-09T09:53:45.804077786Z caller=operator.go:454 component=prometheusoperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-06-09T09:53:45.8205685Z caller=operator.go:294 component=thanosoperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-06-09T09:53:45.840575673Z caller=operator.go:214 component=alertmanageroperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-06-09T09:53:46.659994586Z caller=operator.go:701 component=thanosoperator msg="CRD updated" crd=ThanosRuler\nlevel=info ts=2020-06-09T09:53:46.686187634Z caller=operator.go:655 component=alertmanageroperator msg="CRD updated" crd=Alertmanager\nlevel=info ts=2020-06-09T09:53:46.740288951Z caller=operator.go:1920 component=prometheusoperator msg="CRD updated" crd=Prometheus\nlevel=info ts=2020-06-09T09:53:46.970029916Z caller=operator.go:1920 component=prometheusoperator msg="CRD updated" crd=ServiceMonitor\nlevel=info ts=2020-06-09T09:53:47.004458301Z caller=operator.go:1920 component=prometheusoperator msg="CRD updated" crd=PodMonitor\nlevel=info ts=2020-06-09T09:53:47.226921608Z caller=operator.go:1920 component=prometheusoperator msg="CRD updated" crd=PrometheusRule\nts=2020-06-09T09:53:49.866907168Z caller=main.go:389 msg="Unhandled error received. Exiting..." err="creating CRDs failed: waiting for ThanosRuler crd failed: timed out waiting for Custom Resource: failed to list CRD: Get https://172.30.0.1:443/apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/prometheuses?limit=500: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jun 09 09:53:56.385 E ns/openshift-monitoring pod/prometheus-operator-5cb5db8677-7wjcs node/ip-10-0-149-222.ec2.internal container/prometheus-operator container exited with code 1 (Error): ts=2020-06-09T09:53:54.574722572Z caller=main.go:221 msg="Starting Prometheus Operator version '0.38.1'."\nts=2020-06-09T09:53:54.874370249Z caller=main.go:105 msg="Starting insecure server on [::]:8080"\nlevel=info ts=2020-06-09T09:53:55.123923369Z caller=operator.go:214 component=alertmanageroperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-06-09T09:53:55.134758227Z caller=operator.go:454 component=prometheusoperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-06-09T09:53:55.136572635Z caller=operator.go:294 component=thanosoperator msg="connection established" cluster-version=v1.18.3\nts=2020-06-09T09:53:55.947405971Z caller=main.go:389 msg="Unhandled error received. Exiting..." err="creating CRDs failed: getting CRD: ThanosRuler: customresourcedefinitions.apiextensions.k8s.io \"thanosrulers.monitoring.coreos.com\" is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-operator\" cannot get resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope"\n
Jun 09 09:54:59.801 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available