Result: SUCCESS
Tests: 3 failed / 23 succeeded
Started: 2020-09-18 14:31
Elapsed: 1h45m
Work namespace: ci-op-9f2qg221
Pod: 9e7c178f-f9bb-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (50m32s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 6s of 46m50s (0%):

Sep 18 15:24:54.811 E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service stopped responding to GET requests over new connections
Sep 18 15:24:55.811 - 999ms E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service is not responding to GET requests over new connections
Sep 18 15:24:57.235 I ns/e2e-k8s-service-lb-available-6677 svc/service-test Service started responding to GET requests over new connections
Sep 18 15:41:31.811 E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service stopped responding to GET requests over new connections
Sep 18 15:41:32.811 - 1s    E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service is not responding to GET requests over new connections
Sep 18 15:41:34.201 I ns/e2e-k8s-service-lb-available-6677 svc/service-test Service started responding to GET requests over new connections
Sep 18 15:47:01.811 E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service stopped responding to GET requests on reused connections
Sep 18 15:47:01.983 I ns/e2e-k8s-service-lb-available-6677 svc/service-test Service started responding to GET requests on reused connections
Sep 18 15:50:45.811 E ns/e2e-k8s-service-lb-available-6677 svc/service-test Service stopped responding to GET requests on reused connections
Sep 18 15:50:45.981 I ns/e2e-k8s-service-lb-available-6677 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1600444979.xml
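
The disruption check above polls the load-balanced service roughly once per second and records only the transitions between responding and not responding, tracked separately for new and reused connections. A minimal sketch of that polling pattern, assuming a placeholder service URL and plain net/http rather than the actual openshift-tests monitor, looks like this:

// Sketch only: poll a service over fresh TCP connections and log availability
// transitions, in the spirit of the "over new connections" events above.
// The target URL is a placeholder, not the real e2e service.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	const target = "http://service-test.example.invalid/" // placeholder endpoint

	// Disabling keep-alives forces every probe onto a new connection, which is
	// what distinguishes the "over new connections" checks from the
	// "on reused connections" ones.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
			DialContext:       (&net.Dialer{Timeout: 2 * time.Second}).DialContext,
		},
	}

	available := true
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for range ticker.C { // runs until interrupted
		resp, err := client.Get(target)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		// Emit an event only when availability flips, mirroring the
		// stopped/started responding pairs in the output above.
		if ok != available {
			if ok {
				fmt.Printf("%s Service started responding to GET requests over new connections\n",
					time.Now().Format(time.RFC3339))
			} else {
				fmt.Printf("%s Service stopped responding to GET requests over new connections: %v\n",
					time.Now().Format(time.RFC3339), err)
			}
			available = ok
		}
	}
}

The total outage reported for the test (6s of 46m50s) is simply the sum of the gaps between each "stopped responding" event and the matching "started responding" event.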



Cluster upgrade OpenShift APIs remain available (50m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 50m0s (0%):

Sep 18 15:52:14.164 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-9f2qg221-14c58.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 44.238.148.163:6443: connect: connection refused
Sep 18 15:52:14.998 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 18 15:52:15.126 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1600444979.xml
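
The probe in the event above is a GET against an aggregated openshift-apiserver resource (the deliberately missing imagestream "missing") with a 15-second timeout. A minimal sketch of that probe, with the cluster URL as a placeholder and TLS verification skipped for brevity (the real monitor authenticates against the cluster under test), treats any HTTP-level response as the API being available and a dial error such as the "connection refused" above as an outage:

// Sketch only: issue the same kind of GET the availability monitor reports on.
// The API URL is a placeholder; credentials and CA verification are omitted.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder cluster API endpoint; the real run probed the cluster under test.
	url := "https://api.example.invalid:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"

	client := &http.Client{
		Timeout: 15 * time.Second,
		// Skipping verification keeps the probe itself visible; the e2e monitor
		// uses the cluster's kubeconfig instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		// A dial error here corresponds to the "stopped responding" event above.
		fmt.Printf("OpenShift API stopped responding to GET requests: %v\n", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP response, even a 404 for the missing imagestream, means the
	// aggregated openshift-apiserver answered the request.
	fmt.Printf("OpenShift API responded with HTTP %d\n", resp.StatusCode)
}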



openshift-tests Monitor cluster while tests execute (55m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
98 error level events were detected during this test run:

Sep 18 15:08:37.233 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-149-199.us-west-2.compute.internal node/ip-10-0-149-199.us-west-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): etadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/infrastructures?allowWatchBookmarks=true&resourceVersion=15839&timeout=9m53s&timeoutSeconds=593&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:08:36.439384       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=15834&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:08:36.440846       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/cloudcredential.openshift.io/v1/credentialsrequests?allowWatchBookmarks=true&resourceVersion=15834&timeout=6m57s&timeoutSeconds=417&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:08:36.442065       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://localhost:6443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=15572&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:08:36.442813       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consoleyamlsamples?allowWatchBookmarks=true&resourceVersion=15831&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 15:08:36.585397       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0918 15:08:36.585479       1 controllermanager.go:291] leaderelection lost\nI0918 15:08:36.614146       1 garbagecollector.go:147] Shutting down garbage collector controller\n
Sep 18 15:09:01.226 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-149-199.us-west-2.compute.internal node/ip-10-0-149-199.us-west-2.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Sep 18 15:09:06.329 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-149-199.us-west-2.compute.internal node/ip-10-0-149-199.us-west-2.compute.internal container/kube-controller-manager-recovery-controller container exited with code 255 (Error): /namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=19831&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:09:05.271872       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=19831&timeout=9m52s&timeoutSeconds=592&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:09:05.274606       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=19935&timeout=5m21s&timeoutSeconds=321&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:09:05.275892       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=20237&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:09:05.277668       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=19837&timeoutSeconds=506&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:09:05.292837       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=19935&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 15:09:05.293049       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0918 15:09:05.293094       1 leaderelection.go:67] leaderelection lost\n
Sep 18 15:13:24.358 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5d4644cc96-j9d2s node/ip-10-0-149-199.us-west-2.compute.internal container/openshift-apiserver-operator container exited with code 255 (Error): generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-operator: observed generation is 1, desired generation is 2."\nI0918 14:56:20.223550       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f2927d38-4120-41ed-9bc9-dd79f963179b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("")\nI0918 14:56:20.234267       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f2927d38-4120-41ed-9bc9-dd79f963179b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable"\nI0918 14:58:00.008709       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"f2927d38-4120-41ed-9bc9-dd79f963179b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nW0918 15:02:41.639157       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nI0918 15:13:23.208444       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:13:23.208629       1 prune_controller.go:204] Shutting down EncryptionPruneController\nF0918 15:13:23.208685       1 builder.go:210] server exited\n
Sep 18 15:17:32.959 E ns/openshift-insights pod/insights-operator-69b744ff66-nxnhl node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 255 (Error): 40.333248       1 httplog.go:90] GET /metrics: (4.873236ms) 200 [Prometheus/2.15.2 10.128.2.11:33670]\nI0918 15:16:47.742366       1 httplog.go:90] GET /metrics: (1.493011ms) 200 [Prometheus/2.15.2 10.131.0.12:40668]\nI0918 15:16:59.873542       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0918 15:16:59.884812       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0918 15:17:00.306577       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 21198 (26690)\nI0918 15:17:00.306710       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24588 (26690)\nI0918 15:17:01.306747       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0918 15:17:01.306862       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0918 15:17:10.334125       1 httplog.go:90] GET /metrics: (5.690729ms) 200 [Prometheus/2.15.2 10.128.2.11:33670]\nI0918 15:17:17.744051       1 httplog.go:90] GET /metrics: (2.677599ms) 200 [Prometheus/2.15.2 10.131.0.12:40668]\nI0918 15:17:32.449240       1 observer_polling.go:116] Observed file "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" has been modified (old="d868a7bea8e50af45f5e06d43db81cbee0915fd2d40b266711e45f7790277127", new="a62d0a5df31ccf266e0dfe3fce33c1131e55a7e785b17a773ccc614530c26762")\nW0918 15:17:32.449325       1 builder.go:101] Restart triggered because of file /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt was modified\nF0918 15:17:32.449392       1 start.go:80] stopped\n
Sep 18 15:18:45.827 E ns/openshift-kube-storage-version-migrator pod/migrator-d86659794-vhbj9 node/ip-10-0-184-151.us-west-2.compute.internal container/migrator container exited with code 2 (Error): I0918 15:05:32.965107       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 18 15:20:13.612 E ns/openshift-cluster-machine-approver pod/machine-approver-c9cb694cd-d5r6b node/ip-10-0-149-199.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error):        1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:09:26.259965       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:09:27.260544       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:15:00.870716       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=21578&timeoutSeconds=329&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:15:01.871625       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:15:06.630417       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\n
Sep 18 15:20:20.352 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-248-134.us-west-2.compute.internal node/ip-10-0-248-134.us-west-2.compute.internal container/kube-scheduler container exited with code 255 (Error): true&resourceVersion=20890&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:18.317408       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=23776&timeout=6m48s&timeoutSeconds=408&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:18.319486       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=23371&timeout=5m9s&timeoutSeconds=309&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:18.321731       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=27487&timeout=9m22s&timeoutSeconds=562&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:18.322763       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=23776&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:18.324470       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23776&timeout=8m40s&timeoutSeconds=520&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 15:20:19.278765       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0918 15:20:19.278796       1 server.go:244] leaderelection lost\n
Sep 18 15:20:45.728 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-248-134.us-west-2.compute.internal node/ip-10-0-248-134.us-west-2.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Sep 18 15:20:46.749 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-248-134.us-west-2.compute.internal node/ip-10-0-248-134.us-west-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): m36s&timeoutSeconds=456&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:45.356818       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=23041&timeout=8m59s&timeoutSeconds=539&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:45.358293       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=28490&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:45.363182       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=22222&timeout=8m43s&timeoutSeconds=523&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:45.366738       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=27959&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 15:20:45.367868       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=23776&timeout=5m2s&timeoutSeconds=302&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 15:20:45.764422       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0918 15:20:45.764467       1 policy_controller.go:94] leaderelection lost\nI0918 15:20:45.768378       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-248-134 stopped leading\n
Sep 18 15:20:47.798 E ns/openshift-controller-manager pod/controller-manager-xnx8j node/ip-10-0-164-108.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): tch stream event decoding: unexpected EOF\nI0918 15:16:59.818359       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.818621       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.818833       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819054       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819316       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819585       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819748       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819835       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819918       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819928       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819934       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819939       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819948       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819955       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819966       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819969       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:16:59.819979       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 18 15:20:58.919 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-5dfc8bf9b9-rlndv node/ip-10-0-149-199.us-west-2.compute.internal container/operator container exited with code 255 (Error): .566168ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0918 15:20:56.256382       1 request.go:565] Throttling request took 167.813049ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0918 15:20:56.317530       1 status_controller.go:176] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-18T14:51:29Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-18T15:20:16Z","message":"Progressing: daemonset/controller-manager: updated number scheduled is 2, desired number scheduled is 3","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-09-18T14:55:56Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-18T14:51:29Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0918 15:20:56.324304       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"d3a551f4-91ff-4da2-be2a-98d7f6b64f0c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "" to "Progressing: daemonset/controller-manager: updated number scheduled is 2, desired number scheduled is 3"\nI0918 15:20:57.946075       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:20:57.946601       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0918 15:20:57.946625       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0918 15:20:57.946766       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0918 15:20:57.946780       1 builder.go:243] stopped\n
Sep 18 15:21:01.116 E ns/openshift-monitoring pod/cluster-monitoring-operator-777b84c749-6w5wt node/ip-10-0-248-134.us-west-2.compute.internal container/kube-rbac-proxy container exited with code 255 (Error): I0918 15:21:00.534235       1 main.go:186] Valid token audiences: \nI0918 15:21:00.534426       1 main.go:248] Reading certificate files\nF0918 15:21:00.534472       1 main.go:252] Failed to initialize certificate reloader: error loading certificates: error loading certificate: open /etc/tls/private/tls.crt: no such file or directory\n
Sep 18 15:21:04.146 E ns/openshift-authentication pod/oauth-openshift-5cd684c548-bgq5z node/ip-10-0-149-199.us-west-2.compute.internal container/oauth-openshift container exited with code 1 (Error): Copying system trust bundle\ncp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied\n
Sep 18 15:21:08.169 E ns/openshift-monitoring pod/kube-state-metrics-7df89f9b97-wcvlt node/ip-10-0-184-151.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Sep 18 15:21:30.958 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-212-152.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/09/18 15:04:11 Watching directory: "/etc/alertmanager/config"\n
Sep 18 15:21:31.006 E ns/openshift-monitoring pod/thanos-querier-6f59d56d68-mw6rk node/ip-10-0-212-152.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): :56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 15:03:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 15:03:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 15:03:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 15:03:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 15:03:56 http.go:107: HTTPS: listening on [::]:9091\nI0918 15:03:56.359095       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 15:04:47 oauthproxy.go:774: basicauth: 10.129.0.4:43688 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:05:47 oauthproxy.go:774: basicauth: 10.129.0.4:44716 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:06:47 oauthproxy.go:774: basicauth: 10.129.0.4:45254 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:10:47 oauthproxy.go:774: basicauth: 10.129.0.4:34528 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:13:27 oauthproxy.go:774: basicauth: 10.128.0.44:51452 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:15:26 oauthproxy.go:774: basicauth: 10.128.0.44:60916 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:15:26 oauthproxy.go:774: basicauth: 10.128.0.44:60916 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:16:26 oauthproxy.go:774: basicauth: 10.128.0.44:33478 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:16:26 oauthproxy.go:774: basicauth: 10.128.0.44:33478 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 15:21:32.155 E ns/openshift-controller-manager pod/controller-manager-h9j79 node/ip-10-0-149-199.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0918 15:20:58.219525       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-111-gb28647ee)\nI0918 15:20:58.221475       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ocp/4.5-2020-09-18-103604@sha256:5e6ae1bbcb18194edb1c066c6f61b3a7e9cd45a156f6afc08e442afe4d9691f6"\nI0918 15:20:58.221494       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ocp/4.5-2020-09-18-103604@sha256:758196af8b69b6347ff6224bd461d825ab238cf27944b7f178a184c76d8dcabd"\nI0918 15:20:58.221585       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0918 15:20:58.221595       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Sep 18 15:22:18.903 E ns/openshift-console-operator pod/console-operator-7997c99c97-p2l85 node/ip-10-0-248-134.us-west-2.compute.internal container/console-operator container exited with code 255 (Error): nexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214618       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214631       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214637       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214649       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214662       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214673       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214678       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214689       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:20:09.214700       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0918 15:22:18.164791       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:22:18.165464       1 controller.go:70] Shutting down Console\nI0918 15:22:18.165516       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0918 15:22:18.165528       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0918 15:22:18.165559       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0918 15:22:18.165569       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0918 15:22:18.165581       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0918 15:22:18.165590       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0918 15:22:18.165601       1 status_controller.go:212] Shutting down StatusSyncer-console\nF0918 15:22:18.165762       1 builder.go:243] stopped\n
Sep 18 15:22:26.134 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-188-241.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T15:22:01.804Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T15:22:01.808Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T15:22:01.809Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T15:22:01.810Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T15:22:01.810Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T15:22:01.810Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T15:22:01.811Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T15:22:01.811Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T15:22:01.811Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T15:22:01.818Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T15:22:01.818Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 15:22:53.348 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-212-152.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T15:22:40.028Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T15:22:40.034Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T15:22:40.035Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T15:22:40.036Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T15:22:40.036Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T15:22:40.037Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T15:22:40.037Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T15:22:40.038Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T15:22:40.038Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T15:22:40.039Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 15:22:58.134 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-188-241.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/09/18 15:04:32 Watching directory: "/etc/alertmanager/config"\n
Sep 18 15:22:58.134 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-188-241.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/18 15:04:32 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 15:04:32 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 15:04:32 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 15:04:32 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 15:04:32 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 15:04:32 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 15:04:32 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 15:04:32.637866       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 15:04:32 http.go:107: HTTPS: listening on [::]:9095\n
Sep 18 15:23:25.793 E clusteroperator/monitoring changed Degraded to True: UpdatingkubeStateMetricsFailed: Failed to rollout the stack. Error: running task Updating kube-state-metrics failed: reconciling kube-state-metrics Deployment failed: creating Deployment object failed after update failed: object is being deleted: deployments.apps "kube-state-metrics" already exists
Sep 18 15:24:54.433 E ns/openshift-sdn pod/sdn-kx5b7 node/ip-10-0-188-241.us-west-2.compute.internal container/sdn container exited with code 255 (Error): nitoring/thanos-querier:tenancy to [10.131.0.32:9092]\nI0918 15:23:52.715167    2010 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-monitoring/thanos-querier:web to [10.131.0.32:9091]\nI0918 15:23:52.869406    2010 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:23:52.869445    2010 proxier.go:347] userspace syncProxyRules took 47.383671ms\nI0918 15:23:54.724423    2010 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-monitoring/thanos-querier:tenancy to [10.128.2.24:9092 10.131.0.32:9092]\nI0918 15:23:54.724472    2010 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-monitoring/thanos-querier:web to [10.128.2.24:9091 10.131.0.32:9091]\nI0918 15:23:54.851032    2010 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:23:54.851063    2010 proxier.go:347] userspace syncProxyRules took 30.309565ms\nI0918 15:24:24.971784    2010 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:24:24.971829    2010 proxier.go:347] userspace syncProxyRules took 27.098783ms\nI0918 15:24:41.944394    2010 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.16:6443 10.130.0.4:6443]\nI0918 15:24:41.944450    2010 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0918 15:24:41.944472    2010 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.16:8443 10.130.0.4:8443]\nI0918 15:24:41.944485    2010 roundrobin.go:217] Delete endpoint 10.128.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0918 15:24:42.072434    2010 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:24:42.072458    2010 proxier.go:347] userspace syncProxyRules took 27.457301ms\nF0918 15:24:54.297741    2010 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 18 15:25:01.459 E ns/openshift-sdn pod/sdn-controller-wg6ml node/ip-10-0-248-134.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0918 14:50:38.698029       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0918 14:54:59.918260       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-9f2qg221-14c58.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Sep 18 15:25:13.057 E ns/openshift-multus pod/multus-nkk2p node/ip-10-0-188-241.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Sep 18 15:25:24.428 E ns/openshift-sdn pod/sdn-vnw88 node/ip-10-0-184-151.us-west-2.compute.internal container/sdn container exited with code 255 (Error): eb to [10.131.0.32:9091]\nI0918 15:23:52.845915    2098 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:23:52.845942    2098 proxier.go:347] userspace syncProxyRules took 29.340073ms\nI0918 15:23:54.726839    2098 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-monitoring/thanos-querier:tenancy to [10.128.2.24:9092 10.131.0.32:9092]\nI0918 15:23:54.726886    2098 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-monitoring/thanos-querier:web to [10.128.2.24:9091 10.131.0.32:9091]\nI0918 15:23:54.855280    2098 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:23:54.855302    2098 proxier.go:347] userspace syncProxyRules took 29.419054ms\nI0918 15:24:24.981416    2098 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:24:24.981438    2098 proxier.go:347] userspace syncProxyRules took 28.824168ms\nI0918 15:24:41.945241    2098 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.16:6443 10.130.0.4:6443]\nI0918 15:24:41.945281    2098 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0918 15:24:41.945297    2098 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.16:8443 10.130.0.4:8443]\nI0918 15:24:41.945304    2098 roundrobin.go:217] Delete endpoint 10.128.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0918 15:24:42.090056    2098 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:24:42.090101    2098 proxier.go:347] userspace syncProxyRules took 45.99354ms\nI0918 15:25:12.229777    2098 proxier.go:368] userspace proxy: processing 0 service events\nI0918 15:25:12.229827    2098 proxier.go:347] userspace syncProxyRules took 36.368726ms\nF0918 15:25:23.952886    2098 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 18 15:25:58.545 E ns/openshift-multus pod/multus-admission-controller-f7w9p node/ip-10-0-164-108.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Sep 18 15:26:19.797 E ns/openshift-sdn pod/sdn-pnm6z node/ip-10-0-248-134.us-west-2.compute.internal container/sdn container exited with code 255 (Error): ice port "openshift-ingress-operator/metrics:metrics" at 172.30.121.65:9393/TCP\nI0918 15:25:52.103459  102805 service.go:379] Adding new service port "openshift-kube-scheduler/scheduler:https" at 172.30.168.85:443/TCP\nI0918 15:25:52.103470  102805 service.go:379] Adding new service port "openshift-console-operator/metrics:https" at 172.30.204.199:443/TCP\nI0918 15:25:52.103480  102805 service.go:379] Adding new service port "openshift-kube-apiserver-operator/metrics:https" at 172.30.115.217:443/TCP\nI0918 15:25:52.103491  102805 service.go:379] Adding new service port "e2e-k8s-service-lb-available-6677/service-test:" at 172.30.161.216:80/TCP\nI0918 15:25:52.103835  102805 proxier.go:813] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0918 15:25:52.178235  102805 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:25:52.179825  102805 proxier.go:349] userspace syncProxyRules took 78.098635ms\nI0918 15:25:52.188621  102805 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:25:52.189251  102805 proxier.go:349] userspace syncProxyRules took 87.356905ms\nI0918 15:25:52.228270  102805 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:http" (:31254/tcp)\nI0918 15:25:52.228364  102805 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:https" (:31360/tcp)\nI0918 15:25:52.228450  102805 proxier.go:1656] Opened local port "nodePort for e2e-k8s-service-lb-available-6677/service-test:" (:31389/tcp)\nI0918 15:25:52.250743  102805 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30544\nI0918 15:25:52.262166  102805 proxy.go:311] openshift-sdn proxy services and endpoints initialized\nI0918 15:25:52.262192  102805 cmd.go:172] openshift-sdn network plugin registering startup\nI0918 15:25:52.262288  102805 cmd.go:176] openshift-sdn network plugin ready\nF0918 15:26:19.162138  102805 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 18 15:26:26.961 E ns/openshift-multus pod/multus-2wbsq node/ip-10-0-212-152.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Sep 18 15:26:51.003 E ns/openshift-sdn pod/sdn-xk7jz node/ip-10-0-212-152.us-west-2.compute.internal container/sdn container exited with code 255 (Error):  15:25:27.523488   73400 roundrobin.go:217] Delete endpoint 10.130.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0918 15:25:27.637282   73400 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:25:27.637964   73400 proxier.go:349] userspace syncProxyRules took 28.802914ms\nI0918 15:25:27.768174   73400 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:25:27.768863   73400 proxier.go:349] userspace syncProxyRules took 29.989865ms\nI0918 15:26:20.424101   73400 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.73:6443 10.129.0.16:6443 10.130.0.90:6443]\nI0918 15:26:20.424151   73400 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.73:8443 10.129.0.16:8443 10.130.0.90:8443]\nI0918 15:26:20.438249   73400 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.73:6443 10.130.0.90:6443]\nI0918 15:26:20.438291   73400 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0918 15:26:20.438313   73400 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.73:8443 10.130.0.90:8443]\nI0918 15:26:20.438326   73400 roundrobin.go:217] Delete endpoint 10.129.0.16:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0918 15:26:20.553256   73400 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:26:20.553894   73400 proxier.go:349] userspace syncProxyRules took 31.086461ms\nI0918 15:26:20.681663   73400 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:26:20.682289   73400 proxier.go:349] userspace syncProxyRules took 31.064089ms\nF0918 15:26:50.258595   73400 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 18 15:26:51.186 E ns/openshift-multus pod/multus-admission-controller-27rlq node/ip-10-0-149-199.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Sep 18 15:27:19.350 E ns/openshift-sdn pod/sdn-wpmhm node/ip-10-0-149-199.us-west-2.compute.internal container/sdn container exited with code 255 (Error): ultus-admission-controller:metrics to [10.128.0.73:8443 10.129.0.16:8443 10.130.0.90:8443]\nI0918 15:26:20.446167  106097 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.73:6443 10.130.0.90:6443]\nI0918 15:26:20.446201  106097 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0918 15:26:20.446216  106097 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.73:8443 10.130.0.90:8443]\nI0918 15:26:20.446225  106097 roundrobin.go:217] Delete endpoint 10.129.0.16:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0918 15:26:20.603888  106097 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:26:20.604373  106097 proxier.go:349] userspace syncProxyRules took 30.108807ms\nI0918 15:26:20.716996  106097 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:26:20.717545  106097 proxier.go:349] userspace syncProxyRules took 29.141937ms\nI0918 15:26:50.767565  106097 pod.go:541] CNI_DEL openshift-multus/multus-admission-controller-27rlq\nI0918 15:26:52.663635  106097 pod.go:505] CNI_ADD openshift-multus/multus-admission-controller-7rq5v got IP 10.129.0.84, ofport 85\nI0918 15:27:00.198141  106097 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.73:6443 10.129.0.84:6443 10.130.0.90:6443]\nI0918 15:27:00.198169  106097 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.73:8443 10.129.0.84:8443 10.130.0.90:8443]\nI0918 15:27:00.310918  106097 proxier.go:370] userspace proxy: processing 0 service events\nI0918 15:27:00.311411  106097 proxier.go:349] userspace syncProxyRules took 24.52989ms\nF0918 15:27:18.662663  106097 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 18 15:27:33.879 E ns/openshift-multus pod/multus-cx94j node/ip-10-0-184-151.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Sep 18 15:28:47.602 E ns/openshift-multus pod/multus-762b9 node/ip-10-0-149-199.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Sep 18 15:31:04.573 E ns/openshift-multus pod/multus-v6t87 node/ip-10-0-248-134.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Sep 18 15:31:52.377 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-dns-operator/dns-operator" (474 of 586)
Sep 18 15:35:22.753 E ns/openshift-machine-config-operator pod/machine-config-daemon-qwqbz node/ip-10-0-184-151.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:35:41.784 E ns/openshift-machine-config-operator pod/machine-config-daemon-mrpnt node/ip-10-0-149-199.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:35:57.408 E ns/openshift-machine-config-operator pod/machine-config-daemon-t89rc node/ip-10-0-212-152.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:36:20.522 E ns/openshift-machine-config-operator pod/machine-config-daemon-wpbhg node/ip-10-0-188-241.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:36:40.507 E ns/openshift-machine-config-operator pod/machine-config-daemon-lxds9 node/ip-10-0-248-134.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:36:58.536 E ns/openshift-machine-config-operator pod/machine-config-daemon-tgfbh node/ip-10-0-164-108.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Sep 18 15:39:34.048 E ns/openshift-machine-config-operator pod/machine-config-server-bhl2j node/ip-10-0-248-134.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0918 14:52:08.931257       1 start.go:38] Version: v4.4.0-202009102017.p0-dirty (b5ab0fbd69c492007e09455099fd3f3a79d3433b)\nI0918 14:52:08.931874       1 api.go:56] Launching server on :22624\nI0918 14:52:08.932180       1 api.go:56] Launching server on :22623\n
Sep 18 15:39:39.375 E ns/openshift-marketplace pod/redhat-marketplace-f9566795d-49gr7 node/ip-10-0-184-151.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Sep 18 15:39:40.608 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6774lmrzj node/ip-10-0-149-199.us-west-2.compute.internal container/operator container exited with code 255 (Error): :418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 1 items received\nI0918 15:37:32.157541       1 httplog.go:90] GET /metrics: (5.078588ms) 200 [Prometheus/2.15.2 10.128.2.20:49270]\nI0918 15:37:32.335646       1 httplog.go:90] GET /metrics: (1.5891ms) 200 [Prometheus/2.15.2 10.131.0.31:42428]\nI0918 15:37:38.554708       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nI0918 15:37:53.447394       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Deployment total 1 items received\nI0918 15:37:58.466323       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 1 items received\nI0918 15:38:02.158242       1 httplog.go:90] GET /metrics: (5.753121ms) 200 [Prometheus/2.15.2 10.128.2.20:49270]\nI0918 15:38:02.335661       1 httplog.go:90] GET /metrics: (1.627308ms) 200 [Prometheus/2.15.2 10.131.0.31:42428]\nI0918 15:38:32.158034       1 httplog.go:90] GET /metrics: (5.49925ms) 200 [Prometheus/2.15.2 10.128.2.20:49270]\nI0918 15:38:32.335861       1 httplog.go:90] GET /metrics: (1.824772ms) 200 [Prometheus/2.15.2 10.131.0.31:42428]\nI0918 15:39:02.158715       1 httplog.go:90] GET /metrics: (5.916901ms) 200 [Prometheus/2.15.2 10.128.2.20:49270]\nI0918 15:39:02.335700       1 httplog.go:90] GET /metrics: (1.553925ms) 200 [Prometheus/2.15.2 10.131.0.31:42428]\nI0918 15:39:27.464481       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Service total 0 items received\nI0918 15:39:32.158045       1 httplog.go:90] GET /metrics: (5.365886ms) 200 [Prometheus/2.15.2 10.128.2.20:49270]\nI0918 15:39:32.335714       1 httplog.go:90] GET /metrics: (1.648704ms) 200 [Prometheus/2.15.2 10.131.0.31:42428]\nI0918 15:39:39.357090       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0918 15:39:39.357296       1 builder.go:210] server exited\n
Sep 18 15:39:41.599 E ns/openshift-cluster-machine-approver pod/machine-approver-79b69967d8-6qbrg node/ip-10-0-149-199.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): e%3Dmachine-approver&resourceVersion=30915&timeoutSeconds=431&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:23:30.381392       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=24866&timeoutSeconds=499&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:23:31.381830       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=30915&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:23:31.383030       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=24866&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:23:32.382272       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=30915&timeoutSeconds=319&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0918 15:23:32.383394       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=24866&timeoutSeconds=346&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 18 15:41:42.681 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Sep 18 15:42:09.433 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 18 15:43:41.652 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-149-199.us-west-2.compute.internal" not ready since 2020-09-18 15:42:08 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-149-199.us-west-2.compute.internal is unhealthy
Sep 18 15:43:42.232 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-188-241.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error):  msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-18T15:22:11.009973479Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-18T15:22:16.010052262Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-18T15:22:21.010081758Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-18T15:22:26.011669063Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T15:22:31.188851242Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-18T15:22:31.188942404Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-18T15:23:42.359064656Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheu
Sep 18 15:43:42.232 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-188-241.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 15:22:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/18 15:22:26 config map updated\n2020/09/18 15:22:26 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n2020/09/18 15:24:51 config map updated\n2020/09/18 15:24:51 successfully triggered reload\n
Sep 18 15:43:42.232 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-188-241.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/09/18 15:22:20 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 15:22:20 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 15:22:20 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 15:22:20 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 15:22:20 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 15:22:20 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 15:22:20 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 15:22:20 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 15:22:20 http.go:107: HTTPS: listening on [::]:9091\nI0918 15:22:20.845124       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 15:37:51 oauthproxy.go:774: basicauth: 10.130.0.81:57050 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:37:51 oauthproxy.go:774: basicauth: 10.130.0.81:57050 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 15:43:42.267 E ns/openshift-monitoring pod/grafana-6fd586fdcb-mlfxf node/ip-10-0-188-241.us-west-2.compute.internal container/grafana container exited with code 1 (Error): 
Sep 18 15:43:42.267 E ns/openshift-monitoring pod/grafana-6fd586fdcb-mlfxf node/ip-10-0-188-241.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Sep 18 15:43:42.294 E ns/openshift-monitoring pod/prometheus-adapter-864d477b6d-kkck4 node/ip-10-0-188-241.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0918 15:21:47.083187       1 adapter.go:94] successfully using in-cluster auth\nI0918 15:21:56.725836       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0918 15:21:56.725837       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0918 15:21:56.726042       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0918 15:21:56.727204       1 secure_serving.go:178] Serving securely on [::]:6443\nI0918 15:21:56.727397       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 18 15:43:43.312 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-188-241.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/09/18 15:39:58 Watching directory: "/etc/alertmanager/config"\n
Sep 18 15:43:43.312 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-188-241.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/18 15:39:59 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 15:39:59 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 15:39:59 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 15:39:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 15:39:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 15:39:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 15:39:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 15:39:59 http.go:107: HTTPS: listening on [::]:9095\nI0918 15:39:59.183610       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 18 15:43:43.430 E ns/openshift-monitoring pod/kube-state-metrics-67654b5b96-vssvg node/ip-10-0-188-241.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Sep 18 15:44:11.404 E ns/e2e-k8s-sig-apps-job-upgrade-6909 pod/foo-msqld node/ip-10-0-188-241.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Sep 18 15:44:11.498 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-184-151.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T15:44:04.174Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T15:44:04.181Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T15:44:04.181Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T15:44:04.182Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T15:44:04.182Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T15:44:04.182Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T15:44:04.210Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T15:44:04.210Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 15:44:26.437 E ns/e2e-k8s-service-lb-available-6677 pod/service-test-8frvx node/ip-10-0-188-241.us-west-2.compute.internal container/netexec container exited with code 2 (Error): 
Sep 18 15:44:43.552 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-6996bf7fc7-48cmz node/ip-10-0-248-134.us-west-2.compute.internal container/operator container exited with code 255 (Error): 1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 15:44:15.967922       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 15:44:15.969670       1 httplog.go:90] GET /metrics: (5.547586ms) 200 [Prometheus/2.15.2 10.131.0.31:50754]\nI0918 15:44:17.964693       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 15:44:17.964712       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 15:44:17.966699       1 httplog.go:90] GET /metrics: (2.107719ms) 200 [Prometheus/2.15.2 10.129.2.17:55898]\nI0918 15:44:25.654127       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:44:35.471368       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0918 15:44:35.539898       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0918 15:44:35.742988       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:44:41.476660       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:44:41.476980       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0918 15:44:41.477102       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0918 15:44:41.477215       1 finalizer_controller.go:140] Shutting down FinalizerController\nI0918 15:44:41.477232       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-apiserver\nI0918 15:44:41.477245       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0918 15:44:41.478106       1 workload_controller.go:254] Shutting down OpenShiftSvCatAPIServerOperator\nF0918 15:44:41.478212       1 builder.go:243] stopped\n
Sep 18 15:45:52.378 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 18 15:45:54.118 E ns/openshift-marketplace pod/redhat-marketplace-f9566795d-bf95w node/ip-10-0-212-152.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Sep 18 15:46:52.245 E ns/openshift-marketplace pod/community-operators-586f9d55bf-gltxx node/ip-10-0-212-152.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Sep 18 15:47:32.049 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 18 15:48:30.627 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-212-152.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 15:22:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/18 15:25:11 config map updated\n2020/09/18 15:25:11 successfully triggered reload\n
Sep 18 15:48:30.627 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-212-152.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/09/18 15:22:52 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 15:22:52 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 15:22:52 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 15:22:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 15:22:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 15:22:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 15:22:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 15:22:52 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 15:22:52 http.go:107: HTTPS: listening on [::]:9091\nI0918 15:22:52.491951       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 15:26:19 oauthproxy.go:774: basicauth: 10.131.0.28:59240 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:30:55 oauthproxy.go:774: basicauth: 10.131.0.28:36214 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:35:25 oauthproxy.go:774: basicauth: 10.131.0.28:41456 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:39:56 oauthproxy.go:774: basicauth: 10.131.0.28:46474 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:44:2
Sep 18 15:48:30.627 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-212-152.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-18T15:22:46.212217967Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-18T15:22:46.21493608Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-18T15:22:51.214966217Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T15:22:56.390451129Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-18T15:22:56.392847311Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-18T15:22:56.569298989Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-18T15:25:56.54920054Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-18T15:34:56.546672798Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 18 15:48:30.682 E ns/openshift-monitoring pod/prometheus-adapter-864d477b6d-5ddpq node/ip-10-0-212-152.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0918 15:39:45.770290       1 adapter.go:94] successfully using in-cluster auth\nI0918 15:39:55.855118       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0918 15:39:55.855146       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0918 15:39:55.855515       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0918 15:39:55.856340       1 secure_serving.go:178] Serving securely on [::]:6443\nI0918 15:39:55.856548       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nW0918 15:41:00.425337       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nW0918 15:41:00.425412       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\n
Sep 18 15:48:30.830 E ns/openshift-monitoring pod/telemeter-client-5cbfbb8bb4-j772b node/ip-10-0-212-152.us-west-2.compute.internal container/reload container exited with code 2 (Error): 
Sep 18 15:48:30.830 E ns/openshift-monitoring pod/telemeter-client-5cbfbb8bb4-j772b node/ip-10-0-212-152.us-west-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Sep 18 15:48:31.662 E ns/openshift-monitoring pod/thanos-querier-6d7fd7d8bd-2585m node/ip-10-0-212-152.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): roxy.go:774: basicauth: 10.128.0.44:51070 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:25:36 oauthproxy.go:774: basicauth: 10.128.0.44:52314 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:26:26 oauthproxy.go:774: basicauth: 10.128.0.44:52912 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:27:26 oauthproxy.go:774: basicauth: 10.128.0.44:53634 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:29:26 oauthproxy.go:774: basicauth: 10.128.0.44:54956 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:31:26 oauthproxy.go:774: basicauth: 10.128.0.44:57562 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:32:26 oauthproxy.go:774: basicauth: 10.128.0.44:58366 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:35:26 oauthproxy.go:774: basicauth: 10.128.0.44:60622 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:36:26 oauthproxy.go:774: basicauth: 10.128.0.44:33136 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:39:26 oauthproxy.go:774: basicauth: 10.128.0.44:35214 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:40:26 oauthproxy.go:774: basicauth: 10.128.0.44:36582 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:42:26 oauthproxy.go:774: basicauth: 10.128.0.44:34366 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:45:37 oauthproxy.go:774: basicauth: 10.130.0.103:56024 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 15:46:47 oauthproxy.go:774: basicauth: 10.130.0.103:37868 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 15:48:45.743 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-188-241.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T15:48:38.142Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T15:48:38.147Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T15:48:38.169Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T15:48:38.170Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T15:48:38.170Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T15:48:38.170Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=info ts=2020-09-18T15:48:38.170Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=error ts=2020-09-18
Sep 18 15:48:52.107 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-248-134.us-west-2.compute.internal is unhealthy
Sep 18 15:48:58.723 E ns/e2e-k8s-sig-apps-job-upgrade-6909 pod/foo-wlsq5 node/ip-10-0-212-152.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Sep 18 15:48:58.742 E ns/e2e-k8s-sig-apps-job-upgrade-6909 pod/foo-gwm8x node/ip-10-0-212-152.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Sep 18 15:49:17.599 E ns/openshift-cluster-machine-approver pod/machine-approver-79b69967d8-t9f8k node/ip-10-0-164-108.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): arting cluster operator status controller\nI0918 15:44:52.227603       1 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:239\nI0918 15:44:52.227936       1 reflector.go:175] Starting reflector *v1.ClusterOperator (0s) from github.com/openshift/cluster-machine-approver/status.go:98\nI0918 15:44:52.328322       1 main.go:147] CSR csr-t8q22 added\nI0918 15:44:52.329837       1 main.go:150] CSR csr-t8q22 is already approved\nI0918 15:44:52.329946       1 main.go:147] CSR csr-vs4bn added\nI0918 15:44:52.329982       1 main.go:150] CSR csr-vs4bn is already approved\nI0918 15:44:52.330035       1 main.go:147] CSR csr-xq8kn added\nI0918 15:44:52.330067       1 main.go:150] CSR csr-xq8kn is already approved\nI0918 15:44:52.330111       1 main.go:147] CSR csr-5vsj4 added\nI0918 15:44:52.330138       1 main.go:150] CSR csr-5vsj4 is already approved\nI0918 15:44:52.330184       1 main.go:147] CSR csr-p9868 added\nI0918 15:44:52.330209       1 main.go:150] CSR csr-p9868 is already approved\nI0918 15:44:52.330234       1 main.go:147] CSR csr-q2mcz added\nI0918 15:44:52.330285       1 main.go:150] CSR csr-q2mcz is already approved\nI0918 15:44:52.330313       1 main.go:147] CSR csr-qtw9h added\nI0918 15:44:52.330338       1 main.go:150] CSR csr-qtw9h is already approved\nI0918 15:44:52.330382       1 main.go:147] CSR csr-x5xr5 added\nI0918 15:44:52.330409       1 main.go:150] CSR csr-x5xr5 is already approved\nI0918 15:44:52.330435       1 main.go:147] CSR csr-zs75x added\nI0918 15:44:52.330480       1 main.go:150] CSR csr-zs75x is already approved\nI0918 15:44:52.330507       1 main.go:147] CSR csr-4kqcj added\nI0918 15:44:52.330551       1 main.go:150] CSR csr-4kqcj is already approved\nI0918 15:44:52.330583       1 main.go:147] CSR csr-cxvxm added\nI0918 15:44:52.330633       1 main.go:150] CSR csr-cxvxm is already approved\nI0918 15:44:52.330664       1 main.go:147] CSR csr-q6tg7 added\nI0918 15:44:52.330708       1 main.go:150] CSR csr-q6tg7 is already approved\n
Sep 18 15:49:19.108 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-8f7c475d8-57lb6 node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 1 (Error): 5:49:13.775261       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:49:13.775492       1 reflector.go:181] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775544       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0918 15:49:13.775586       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775618       1 base_controller.go:101] Shutting down ConfigObserver ...\nI0918 15:49:13.775646       1 base_controller.go:101] Shutting down StatusSyncer_openshift-controller-manager ...\nI0918 15:49:13.775669       1 builder.go:219] server exited\nI0918 15:49:13.775694       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0918 15:49:13.775736       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0918 15:49:13.775752       1 reflector.go:181] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775777       1 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775807       1 reflector.go:181] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775814       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0918 15:49:13.775858       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0918 15:49:13.775881       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nW0918 15:49:13.775985       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Sep 18 15:49:23.657 E ns/openshift-machine-config-operator pod/machine-config-operator-6ccc76d79c-tmhps node/ip-10-0-164-108.us-west-2.compute.internal container/machine-config-operator container exited with code 2 (Error): I0918 15:44:58.042677       1 start.go:46] Version: 4.5.0-0.ci-2020-09-18-103604 (Raw: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty, Hash: 08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0918 15:44:58.046424       1 leaderelection.go:242] attempting to acquire leader lease  openshift-machine-config-operator/machine-config...\nE0918 15:46:55.874749       1 event.go:316] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"1540d005-4822-4847-b065-47de6a7536be", ResourceVersion:"46009", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736037485, loc:(*time.Location)(0x25205a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-6ccc76d79c-tmhps_761f6e90-bede-4876-b8b1-e5a220181bac\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-18T15:46:55Z\",\"renewTime\":\"2020-09-18T15:46:55Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-6ccc76d79c-tmhps_761f6e90-bede-4876-b8b1-e5a220181bac became leader'\nI0918 15:46:55.874858       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0918 15:46:56.292549       1 operator.go:265] Starting MachineConfigOperator\n
Sep 18 15:49:25.085 E ns/openshift-machine-api pod/machine-api-operator-697497c679-dxw4s node/ip-10-0-164-108.us-west-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Sep 18 15:49:25.544 E ns/openshift-machine-config-operator pod/machine-config-controller-b5c9fd6b5-ntnsp node/ip-10-0-164-108.us-west-2.compute.internal container/machine-config-controller container exited with code 2 (Error): 10-0-212-152.us-west-2.compute.internal to desired config rendered-worker-84d40751915d2cb92c9dec5503444103\nI0918 15:48:25.614173       1 node_controller.go:453] Pool worker: node ip-10-0-212-152.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-84d40751915d2cb92c9dec5503444103\nI0918 15:48:26.624756       1 node_controller.go:453] Pool worker: node ip-10-0-212-152.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0918 15:48:27.371926       1 node_controller.go:434] Pool worker: node ip-10-0-212-152.us-west-2.compute.internal is now reporting unready: node ip-10-0-212-152.us-west-2.compute.internal is reporting Unschedulable\nI0918 15:48:28.082693       1 node_controller.go:434] Pool master: node ip-10-0-248-134.us-west-2.compute.internal is now reporting unready: node ip-10-0-248-134.us-west-2.compute.internal is reporting Unschedulable\nI0918 15:49:04.573086       1 node_controller.go:443] Pool master: node ip-10-0-248-134.us-west-2.compute.internal has completed update to rendered-master-207540d9c532c29bad20a3543380e541\nI0918 15:49:04.581738       1 node_controller.go:436] Pool master: node ip-10-0-248-134.us-west-2.compute.internal is now reporting ready\nI0918 15:49:09.573345       1 node_controller.go:759] Setting node ip-10-0-164-108.us-west-2.compute.internal to desired config rendered-master-207540d9c532c29bad20a3543380e541\nI0918 15:49:09.592073       1 node_controller.go:453] Pool master: node ip-10-0-164-108.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-207540d9c532c29bad20a3543380e541\nI0918 15:49:10.603641       1 node_controller.go:453] Pool master: node ip-10-0-164-108.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0918 15:49:11.694582       1 node_controller.go:434] Pool master: node ip-10-0-164-108.us-west-2.compute.internal is now reporting unready: node ip-10-0-164-108.us-west-2.compute.internal is reporting Unschedulable\n
Sep 18 15:49:26.118 E ns/openshift-machine-api pod/machine-api-controllers-696fc7b677-m2v8l node/ip-10-0-164-108.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Sep 18 15:49:26.157 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7bc75fb8cb-mtzjc node/ip-10-0-164-108.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): ator: no replicas are available")\nI0918 15:48:45.182000       1 status_controller.go:172] clusteroperator/kube-storage-version-migrator diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-18T14:51:28Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-18T15:18:29Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-18T15:48:45Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-18T14:51:27Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0918 15:48:45.193731       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"1ab06d39-b92d-4933-bc16-78cc6a1d7610", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0918 15:49:20.338626       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:49:20.339040       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0918 15:49:20.339124       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nI0918 15:49:20.339176       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0918 15:49:20.339238       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0918 15:49:20.339304       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0918 15:49:20.339340       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0918 15:49:20.339372       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0918 15:49:20.339450       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Sep 18 15:49:27.499 E ns/openshift-insights pod/insights-operator-66c8898867-m4s6s node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 2 (Error): server.go:68] Refreshing configuration from cluster pull secret\nI0918 15:45:41.209483       1 configobserver.go:93] Found cloud.openshift.com token\nI0918 15:45:41.209505       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0918 15:45:53.791691       1 httplog.go:90] GET /metrics: (9.766641ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:46:00.440398       1 httplog.go:90] GET /metrics: (1.544843ms) 200 [Prometheus/2.15.2 10.131.0.31:55226]\nI0918 15:46:23.792183       1 httplog.go:90] GET /metrics: (10.674849ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:46:30.440445       1 httplog.go:90] GET /metrics: (1.648808ms) 200 [Prometheus/2.15.2 10.131.0.31:55226]\nI0918 15:46:41.173230       1 status.go:314] The operator is healthy\nI0918 15:46:53.794368       1 httplog.go:90] GET /metrics: (12.871465ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:47:00.442611       1 httplog.go:90] GET /metrics: (1.466003ms) 200 [Prometheus/2.15.2 10.131.0.31:55226]\nI0918 15:47:17.452616       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0918 15:47:23.787941       1 httplog.go:90] GET /metrics: (6.503535ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:47:30.464372       1 httplog.go:90] GET /metrics: (25.520744ms) 200 [Prometheus/2.15.2 10.131.0.31:55226]\nI0918 15:47:53.788846       1 httplog.go:90] GET /metrics: (7.278188ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:48:00.440448       1 httplog.go:90] GET /metrics: (1.550912ms) 200 [Prometheus/2.15.2 10.131.0.31:55226]\nI0918 15:48:23.789789       1 httplog.go:90] GET /metrics: (8.267196ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:48:41.172493       1 status.go:314] The operator is healthy\nI0918 15:48:53.791423       1 httplog.go:90] GET /metrics: (9.846139ms) 200 [Prometheus/2.15.2 10.129.2.17:59358]\nI0918 15:49:00.444757       1 httplog.go:90] GET /metrics: (1.53371ms) 200 [Prometheus/2.15.2 10.128.2.17:58368]\n
Sep 18 15:49:29.192 E ns/openshift-service-ca-operator pod/service-ca-operator-847f9779f9-6wp6v node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 1 (Error): 
Sep 18 15:49:30.376 E ns/openshift-service-ca pod/service-ca-8d979f96-hk4vk node/ip-10-0-164-108.us-west-2.compute.internal container/service-ca-controller container exited with code 1 (Error): 
Sep 18 15:49:30.445 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-6996bf7fc7-czrgd node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 255 (Error):  found, nothing to delete.\nI0918 15:48:53.073041       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0918 15:48:53.394345       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:49:03.402810       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:49:03.881820       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 15:49:03.881853       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 15:49:03.883808       1 httplog.go:90] GET /metrics: (6.947996ms) 200 [Prometheus/2.15.2 10.129.2.17:59688]\nI0918 15:49:07.220607       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 15:49:07.220627       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 15:49:07.222333       1 httplog.go:90] GET /metrics: (1.80646ms) 200 [Prometheus/2.15.2 10.128.2.17:50836]\nI0918 15:49:13.088115       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0918 15:49:13.114360       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0918 15:49:13.425102       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:49:23.487009       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 15:49:27.053369       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:49:27.053865       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0918 15:49:27.058022       1 builder.go:209] server exited\n
Sep 18 15:49:31.548 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-67745lz4r node/ip-10-0-164-108.us-west-2.compute.internal container/operator container exited with code 255 (Error): tor.go:105\nI0918 15:46:22.107790       1 reflector.go:185] Listing and watching *v1.ServiceCatalogControllerManager from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0918 15:46:22.540823       1 reflector.go:185] Listing and watching *v1.ServiceAccount from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0918 15:46:29.630405       1 httplog.go:90] GET /metrics: (5.45932ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:46:44.267453       1 httplog.go:90] GET /metrics: (6.630946ms) 200 [Prometheus/2.15.2 10.131.0.31:37946]\nI0918 15:46:59.630478       1 httplog.go:90] GET /metrics: (5.542616ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:47:14.267622       1 httplog.go:90] GET /metrics: (6.855945ms) 200 [Prometheus/2.15.2 10.131.0.31:37946]\nI0918 15:47:29.633626       1 httplog.go:90] GET /metrics: (8.641283ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:47:44.266372       1 httplog.go:90] GET /metrics: (5.552129ms) 200 [Prometheus/2.15.2 10.131.0.31:37946]\nI0918 15:47:59.630193       1 httplog.go:90] GET /metrics: (5.26031ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:48:14.267034       1 httplog.go:90] GET /metrics: (6.215207ms) 200 [Prometheus/2.15.2 10.131.0.31:37946]\nI0918 15:48:29.630240       1 httplog.go:90] GET /metrics: (5.316323ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:48:59.630491       1 httplog.go:90] GET /metrics: (5.545779ms) 200 [Prometheus/2.15.2 10.129.2.17:49776]\nI0918 15:49:14.312162       1 httplog.go:90] GET /metrics: (41.158469ms) 200 [Prometheus/2.15.2 10.128.2.17:57488]\nI0918 15:49:27.173393       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 15:49:27.173750       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0918 15:49:27.176289       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0918 15:49:27.176319       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0918 15:49:27.176449       1 builder.go:243] stopped\n
Sep 18 15:49:41.011 E ns/openshift-console pod/console-66cb6d454c-9fpj2 node/ip-10-0-164-108.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-09-18T15:39:48Z cmd/main: cookies are secure!\n2020-09-18T15:39:48Z cmd/main: Binding to [::]:8443...\n2020-09-18T15:39:48Z cmd/main: using TLS\n2020-09-18T15:41:06Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-09-18T15:41:06Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp 172.30.0.1:443: connect: connection refused\n
Sep 18 15:50:09.929 E ns/openshift-monitoring pod/prometheus-operator-d4c5fc854-hbjrc node/ip-10-0-248-134.us-west-2.compute.internal container/prometheus-operator container exited with code 1 (Error): ts=2020-09-18T15:50:06.810326063Z caller=main.go:221 msg="Starting Prometheus Operator version '0.38.1'."\nts=2020-09-18T15:50:06.970577842Z caller=main.go:105 msg="Starting insecure server on [::]:8080"\nlevel=info ts=2020-09-18T15:50:06.981992351Z caller=operator.go:294 component=thanosoperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-09-18T15:50:07.013520147Z caller=operator.go:214 component=alertmanageroperator msg="connection established" cluster-version=v1.18.3\nlevel=info ts=2020-09-18T15:50:07.0301407Z caller=operator.go:454 component=prometheusoperator msg="connection established" cluster-version=v1.18.3\nts=2020-09-18T15:50:08.797022972Z caller=main.go:389 msg="Unhandled error received. Exiting..." err="getting CRD: Alertmanager: customresourcedefinitions.apiextensions.k8s.io \"alertmanagers.monitoring.coreos.com\" is forbidden: User \"system:serviceaccount:openshift-monitoring:prometheus-operator\" cannot get resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope"\n
Sep 18 15:51:09.884 E ns/openshift-marketplace pod/redhat-marketplace-595b87f484-nbhrh node/ip-10-0-184-151.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Sep 18 15:51:38.002 E ns/openshift-marketplace pod/community-operators-fd56f665d-66m5t node/ip-10-0-184-151.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Sep 18 15:51:38.184 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 18 15:51:50.003 E ns/openshift-marketplace pod/certified-operators-785b456bc6-pv2pg node/ip-10-0-184-151.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Sep 18 15:53:31.093 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-164-108.us-west-2.compute.internal is unhealthy