Result: SUCCESS
Tests: 4 failed / 22 succeeded
Started: 2020-09-19 13:36
Elapsed: 1h18m
Work namespace: ci-op-7bmybqjy
Pod: 18340db3-fa7d-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 33m12s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 13s of 30m15s (1%):

Sep 19 14:16:07.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:16:08.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service is not responding to GET requests over new connections
Sep 19 14:16:08.219 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:17:44.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:17:44.175 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:28:31.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:28:31.171 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:28:44.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:28:44.172 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:32:16.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 14:32:16.170 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests on reused connections
Sep 19 14:34:26.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:34:26.172 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:34:39.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:34:39.171 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:35:00.245 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:35:00.248 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:35:28.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests on reused connections
Sep 19 14:35:28.169 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests on reused connections
Sep 19 14:37:39.167 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:37:39.171 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
Sep 19 14:37:43.253 E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service stopped responding to GET requests over new connections
Sep 19 14:37:44.167 - 1s    E ns/e2e-k8s-service-lb-available-9236 svc/service-test Service is not responding to GET requests over new connections
Sep 19 14:37:46.329 I ns/e2e-k8s-service-lb-available-9236 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1600526655.xml
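(Note: the disruption percentage above appears to be total unreachable time divided by the monitored window, rounded to the nearest whole percent: 13s of 30m15s is 13/1815 ≈ 0.7%, reported as 1%; the ingress and API figures below follow the same arithmetic.)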


Cluster upgrade Cluster frontend ingress remain available 32m12s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 1m7s of 32m11s (3%):

Sep 19 14:13:09.097 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:13:09.149 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:13:10.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:13:10.111 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:13:16.097 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:13:16.140 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:13:20.098 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:13:20.126 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:13:31.097 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:13:32.097 - 8s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 19 14:13:33.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:13:33.126 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:13:37.097 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:13:38.097 - 6s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 14:13:41.140 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:13:44.139 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:26:50.097 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:26:50.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:26:50.126 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:26:51.097 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 14:26:55.097 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:26:55.141 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:27:00.159 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:27:03.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:27:03.123 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:29:45.097 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:29:46.097 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 19 14:29:46.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:29:47.097 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 14:29:53.097 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 19 14:29:53.125 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 19 14:29:55.152 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:29:56.136 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 19 14:35:39.097 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 19 14:35:39.097 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 19 14:35:39.133 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 19 14:35:40.097 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 19 14:35:49.126 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1600526655.xml


Cluster upgrade OpenShift APIs remain available 32m12s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 32m11s (0%):

Sep 19 14:10:28.854 I openshift-apiserver OpenShift API stopped responding to GET requests: etcdserver: leader changed
Sep 19 14:10:29.695 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 19 14:10:29.701 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1600526655.xml


openshift-tests Monitor cluster while tests execute 37m33s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
100 error level events were detected during this test run:

Sep 19 14:08:47.887 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 586)
Sep 19 14:10:35.595 E kube-apiserver Kube API started failing: Get https://api.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
Sep 19 14:12:17.152 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7bc75fb8cb-qlj7r node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/kube-storage-version-migrator-operator container exited with code 1 (Error): cted","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T14:10:36Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T13:44:49Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0919 14:10:36.317546       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"9b62df62-50ca-48df-8e9a-037b6b88595a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "TargetDegraded: \"deployments\": etcdserver: request timed out\nTargetDegraded: " to "",Progressing changed from True to False (""),Available changed from False to True ("")\nI0919 14:12:16.413277       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 14:12:16.413665       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0919 14:12:16.413715       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0919 14:12:16.413764       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0919 14:12:16.413794       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0919 14:12:16.413824       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nI0919 14:12:16.413841       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0919 14:12:16.413858       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0919 14:12:16.413871       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0919 14:12:16.413934       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Sep 19 14:12:23.394 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-7bmybqjy-a21cb-jrng4-master-0 node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/setup init container exited with code 124 (Error): ................................................................................
Sep 19 14:12:30.360 E ns/openshift-kube-storage-version-migrator pod/migrator-9f45448cd-qf6nz node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/migrator container exited with code 2 (Error): I0919 14:01:32.684351       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 19 14:12:31.499 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-7bmybqjy-a21cb-jrng4-master-0 node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/cluster-policy-controller container exited with code 255 (Error): tch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=25717&timeout=9m8s&timeoutSeconds=548&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 14:12:30.389535       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.DaemonSet: Get https://localhost:6443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=26669&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 14:12:30.390124       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=24429&timeout=8m58s&timeoutSeconds=538&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 14:12:30.391179       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=26451&timeout=7m30s&timeoutSeconds=450&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 14:12:30.402275       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=27018&timeout=9m36s&timeoutSeconds=576&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 14:12:30.407359       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=24514&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 14:12:30.529562       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0919 14:12:30.529625       1 policy_controller.go:94] leaderelection lost\n
Sep 19 14:12:32.531 E ns/openshift-cluster-machine-approver pod/machine-approver-79b69967d8-hrntc node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/machine-approver-controller container exited with code 2 (Error): ue&resourceVersion=26609&timeoutSeconds=492&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:12:29.287801       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=25446&timeoutSeconds=374&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:12:30.287694       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=26609&timeoutSeconds=352&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:12:30.288573       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=25446&timeoutSeconds=477&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:12:31.288541       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=26609&timeoutSeconds=544&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0919 14:12:31.289452       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=25446&timeoutSeconds=508&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Sep 19 14:12:46.437 E ns/openshift-monitoring pod/node-exporter-sr258 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/node-exporter container exited with code 143 (Error): -19T13:56:29Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:56:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:12:48.430 E ns/openshift-monitoring pod/openshift-state-metrics-b749f8f74-2d4bt node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/openshift-state-metrics container exited with code 2 (Error): 
Sep 19 14:12:49.459 E ns/openshift-monitoring pod/kube-state-metrics-67654b5b96-5r6kc node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/kube-state-metrics container exited with code 2 (Error): 
Sep 19 14:13:14.852 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T13:57:36.282Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T13:57:36.285Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T13:57:36.286Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T13:57:36.287Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T13:57:36.287Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T13:57:36.287Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T13:57:36.289Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T13:57:36.289Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:13:14.852 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 13:57:37 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/19 14:06:35 config map updated\n2020/09/19 14:06:35 successfully triggered reload\n
Sep 19 14:13:14.852 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T13:57:37.615357817Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-19T13:57:37.616946484Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T13:57:42.71825105Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T13:57:42.718339178Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-19T13:57:42.826010914Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:00:42.819231133Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:06:42.839245429Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 19 14:13:25.299 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/config-reloader container exited with code 2 (Error): 2020/09/19 13:57:22 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:13:25.299 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/alertmanager-proxy container exited with code 2 (Error): 2020/09/19 13:57:22 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:57:22 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:57:22 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:57:22 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 13:57:22 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:57:22 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 13:57:22 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:57:22 http.go:107: HTTPS: listening on [::]:9095\nI0919 13:57:22.724711       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 19 14:13:27.888 E ns/openshift-monitoring pod/node-exporter-b69xw node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/node-exporter container exited with code 143 (Error): -19T13:53:03Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-19T13:53:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 19 14:13:28.985 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:13:26.440Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:13:26.447Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:13:26.447Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:13:26.448Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:13:26.448Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:26.448Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:13:26.450Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:13:26.450Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:13:36.527 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 13:57:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/19 14:06:57 config map updated\n2020/09/19 14:06:57 successfully triggered reload\n
Sep 19 14:13:36.527 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus-proxy container exited with code 2 (Error): 2020/09/19 13:57:43 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 13:57:43 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 13:57:43 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 13:57:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 13:57:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 13:57:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 13:57:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 13:57:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 13:57:43 http.go:107: HTTPS: listening on [::]:9091\nI0919 13:57:43.196055       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:13:12 oauthproxy.go:774: basicauth: 10.129.2.23:35180 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 14:13:36.527 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T13:57:41.983660528Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-19T13:57:41.985928043Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T13:57:47.191755028Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T13:57:47.19183351Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-09-19T13:57:47.333962106Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:00:47.287274495Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:09:47.322172511Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 19 14:13:49.990 E ns/openshift-console-operator pod/console-operator-7b49f8dc7b-k7m5x node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/console-operator container exited with code 1 (Error): 40       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.011662       1 reflector.go:181] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.011692       1 reflector.go:181] Stopping reflector *v1.Infrastructure (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0919 14:13:49.011718       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.011738       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.011769       1 base_controller.go:101] Shutting down StatusSyncer_console ...\nI0919 14:13:49.011797       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0919 14:13:49.011815       1 base_controller.go:101] Shutting down ManagementStateController ...\nI0919 14:13:49.011829       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0919 14:13:49.011841       1 base_controller.go:101] Shutting down ResourceSyncController ...\nI0919 14:13:49.011851       1 controller.go:70] Shutting down Console\nI0919 14:13:49.011864       1 controller.go:377] shutting down ConsoleRouteSyncController\nI0919 14:13:49.011877       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0919 14:13:49.011890       1 controller.go:181] shutting down ConsoleServiceSyncController\nI0919 14:13:49.011910       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.012012       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0919 14:13:49.012083       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nW0919 14:13:49.012171       1 builder.go:96] graceful termination failed, controllers failed with error: stopped\n
Sep 19 14:13:55.880 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:13:52.521Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:13:52.525Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:13:52.526Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:13:52.531Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:13:52.531Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:14:38.670 E ns/openshift-marketplace pod/certified-operators-8bf4476fb-tdgkm node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/certified-operators container exited with code 2 (Error): 
Sep 19 14:14:39.679 E ns/openshift-marketplace pod/community-operators-847f4bc97c-9q922 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/community-operators container exited with code 2 (Error): 
Sep 19 14:15:49.659 E ns/openshift-sdn pod/sdn-controller-9xb4m node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/sdn-controller container exited with code 2 (Error):  1 subnets.go:212] Cleared node NetworkUnavailable/NoRouteCreated condition for ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk\nI0919 13:55:36.511649       1 subnets.go:150] Created HostSubnet ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk (host: "ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk", ip: "10.0.32.4", subnet: "10.128.2.0/23")\nI0919 13:55:40.179501       1 subnets.go:212] Cleared node NetworkUnavailable/NoRouteCreated condition for ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf\nI0919 13:55:40.215155       1 subnets.go:150] Created HostSubnet ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf (host: "ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf", ip: "10.0.32.2", subnet: "10.129.2.0/23")\nI0919 14:06:44.213488       1 vnids.go:116] Allocated netid 9912263 for namespace "e2e-kubernetes-api-available-1755"\nI0919 14:06:44.235661       1 vnids.go:116] Allocated netid 11939922 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-191"\nI0919 14:06:44.251513       1 vnids.go:116] Allocated netid 14521336 for namespace "e2e-openshift-api-available-9020"\nI0919 14:06:44.279253       1 vnids.go:116] Allocated netid 14673165 for namespace "e2e-k8s-service-lb-available-9236"\nI0919 14:06:44.301563       1 vnids.go:116] Allocated netid 1695908 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-5831"\nI0919 14:06:44.323568       1 vnids.go:116] Allocated netid 7022518 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9526"\nI0919 14:06:44.356930       1 vnids.go:116] Allocated netid 14563717 for namespace "e2e-frontend-ingress-available-4916"\nI0919 14:06:44.413076       1 vnids.go:116] Allocated netid 5071879 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3711"\nI0919 14:06:44.429650       1 vnids.go:116] Allocated netid 13426701 for namespace "e2e-k8s-sig-apps-deployment-upgrade-4616"\nI0919 14:06:44.447542       1 vnids.go:116] Allocated netid 5365168 for namespace "e2e-k8s-sig-apps-job-upgrade-4261"\nI0919 14:06:44.476572       1 vnids.go:116] Allocated netid 2246163 for namespace "e2e-check-for-critical-alerts-7279"\n
Sep 19 14:16:17.890 E ns/openshift-multus pod/multus-l46nh node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/kube-multus container exited with code 137 (Error): 
Sep 19 14:17:03.596 E ns/openshift-multus pod/multus-pnn4d node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/kube-multus container exited with code 137 (Error): 
Sep 19 14:17:31.237 E ns/openshift-multus pod/multus-admission-controller-pn99b node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/multus-admission-controller container exited with code 137 (Error): 
Sep 19 14:17:43.690 E ns/openshift-sdn pod/sdn-zqzmz node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/sdn container exited with code 255 (Error): penshift-machine-api/machine-api-operator:https" at 172.30.161.26:8443/TCP\nI0919 14:17:25.568063   63523 proxier.go:813] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0919 14:17:25.675373   63523 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:25.675914   63523 proxier.go:349] userspace syncProxyRules took 110.596462ms\nI0919 14:17:25.694342   63523 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:25.696432   63523 proxier.go:349] userspace syncProxyRules took 128.901398ms\nI0919 14:17:25.762803   63523 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:http" (:31550/tcp)\nI0919 14:17:25.763272   63523 proxier.go:1656] Opened local port "nodePort for e2e-k8s-service-lb-available-9236/service-test:" (:32623/tcp)\nI0919 14:17:25.763638   63523 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:https" (:32190/tcp)\nI0919 14:17:25.798924   63523 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32501\nI0919 14:17:25.806402   63523 proxy.go:311] openshift-sdn proxy services and endpoints initialized\nI0919 14:17:25.806442   63523 cmd.go:172] openshift-sdn network plugin registering startup\nI0919 14:17:25.806559   63523 cmd.go:176] openshift-sdn network plugin ready\nI0919 14:17:36.288843   63523 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.129.0.78:6443 10.130.0.75:6443]\nI0919 14:17:36.288913   63523 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.129.0.78:8443 10.130.0.75:8443]\nI0919 14:17:36.417224   63523 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:36.417615   63523 proxier.go:349] userspace syncProxyRules took 30.580784ms\nF0919 14:17:43.249542   63523 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 14:17:59.366 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-7bmybqjy-a21cb-jrng4-master-1 node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/setup init container exited with code 124 (Error): ................................................................................
Sep 19 14:18:12.156 E ns/openshift-sdn pod/sdn-d9sfj node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/sdn container exited with code 255 (Error): :17:00.602868   86974 proxier.go:349] userspace syncProxyRules took 34.950435ms\nI0919 14:17:00.739940   86974 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:00.740749   86974 proxier.go:349] userspace syncProxyRules took 34.522305ms\nI0919 14:17:36.288565   86974 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.129.0.78:6443 10.130.0.75:6443]\nI0919 14:17:36.288672   86974 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.129.0.78:8443 10.130.0.75:8443]\nI0919 14:17:36.431865   86974 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:36.432280   86974 proxier.go:349] userspace syncProxyRules took 33.61373ms\nI0919 14:17:58.374383   86974 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.0.3:10257 10.0.0.5:10257]\nI0919 14:17:58.374431   86974 roundrobin.go:217] Delete endpoint 10.0.0.4:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0919 14:17:58.537308   86974 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:58.537763   86974 proxier.go:349] userspace syncProxyRules took 41.915769ms\nI0919 14:17:59.391816   86974 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.0.3:10257 10.0.0.4:10257 10.0.0.5:10257]\nI0919 14:17:59.545611   86974 proxier.go:370] userspace proxy: processing 0 service events\nI0919 14:17:59.546056   86974 proxier.go:349] userspace syncProxyRules took 35.379266ms\nI0919 14:18:01.630634   86974 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0919 14:18:11.544532   86974 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 19 14:21:48.851 E ns/openshift-machine-config-operator pod/machine-config-operator-6ccc76d79c-59zsp node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/machine-config-operator container exited with code 2 (Error):  DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-6ccc76d79c-59zsp_844afd04-28c8-4bf1-9a64-96138db47f57\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-19T13:50:42Z\",\"renewTime\":\"2020-09-19T13:50:42Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000357360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000357380)}}}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-6ccc76d79c-59zsp_844afd04-28c8-4bf1-9a64-96138db47f57 became leader'\nI0919 13:50:42.240338       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0919 13:50:42.770710       1 operator.go:265] Starting MachineConfigOperator\nE0919 13:51:36.815975       1 operator.go:331] openshift-config-managed/kube-cloud-config configmap is required on platform GCP but not found: configmap "kube-cloud-config" not found\nE0919 13:53:01.053317       1 operator.go:331] openshift-config-managed/kube-cloud-config configmap is required on platform GCP but not found: configmap "kube-cloud-config" not found\nI0919 13:54:25.169921       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"b0320567-8465-4b85-a31b-42ce440e98ef", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 4.5.0-0.ci-2020-09-18-103604}]\n
Sep 19 14:23:46.565 E ns/openshift-machine-config-operator pod/machine-config-daemon-q6lvc node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:23:55.385 E ns/openshift-machine-config-operator pod/machine-config-daemon-r8272 node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:23:59.935 E ns/openshift-machine-config-operator pod/machine-config-daemon-9hjkd node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:24:07.086 E ns/openshift-machine-config-operator pod/machine-config-daemon-d86hc node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:24:11.906 E ns/openshift-machine-config-operator pod/machine-config-daemon-p622t node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:24:21.047 E ns/openshift-machine-config-operator pod/machine-config-daemon-dhnbh node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/oauth-proxy container exited with code 143 (Error): 
Sep 19 14:26:29.013 E ns/openshift-machine-config-operator pod/machine-config-server-w7v7n node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/machine-config-server container exited with code 2 (Error): I0919 13:53:59.496384       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0919 13:53:59.497720       1 api.go:56] Launching server on :22624\nI0919 13:53:59.497856       1 api.go:56] Launching server on :22623\nI0919 13:54:00.277697       1 api.go:102] Pool worker requested by 10.0.32.2:40530\nI0919 13:54:02.818377       1 api.go:102] Pool worker requested by 10.0.32.4:53800\n
Sep 19 14:26:34.628 E ns/openshift-machine-config-operator pod/machine-config-server-gnd8n node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/machine-config-server container exited with code 2 (Error): I0919 13:53:58.734020       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0919 13:53:58.735426       1 api.go:56] Launching server on :22624\nI0919 13:53:58.735468       1 api.go:56] Launching server on :22623\nI0919 13:54:01.650931       1 api.go:102] Pool worker requested by 10.0.32.3:42844\n
Sep 19 14:26:42.452 E ns/openshift-marketplace pod/certified-operators-68597d4644-wzr2z node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/certified-operators container exited with code 2 (Error): 
Sep 19 14:26:42.559 E ns/openshift-marketplace pod/community-operators-6ffcd6f549-r8td9 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/community-operators container exited with code 2 (Error): 
Sep 19 14:26:43.790 E ns/openshift-monitoring pod/telemeter-client-8f97d74dc-z2bcv node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/telemeter-client container exited with code 2 (Error): 
Sep 19 14:26:43.790 E ns/openshift-monitoring pod/telemeter-client-8f97d74dc-z2bcv node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/reload container exited with code 2 (Error): 
Sep 19 14:26:43.888 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/config-reloader container exited with code 2 (Error): 2020/09/19 14:13:41 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:26:43.888 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:13:41 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:41 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:13:41 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:13:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:13:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:13:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0919 14:13:41.730859       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:13:41 http.go:107: HTTPS: listening on [::]:9095\n
Sep 19 14:26:44.876 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:13:52.521Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:13:52.525Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:13:52.526Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:13:52.527Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:13:52.527Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:13:52.531Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:13:52.531Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:26:44.876 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 14:13:54 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 19 14:26:44.876 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/prometheus-proxy container exited with code 2 (Error): 2020/09/19 14:13:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:13:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:13:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:13:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 14:13:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:13:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:13:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:13:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 14:13:55 http.go:107: HTTPS: listening on [::]:9091\nI0919 14:13:55.044813       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:14:12 oauthproxy.go:774: basicauth: 10.129.2.23:36554 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:18:42 oauthproxy.go:774: basicauth: 10.129.2.23:42364 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:23:12 oauthproxy.go:774: basicauth: 10.129.2.23:47362 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 14:26:44.876 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T14:13:53.929133729Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-19T14:13:53.931628209Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T14:13:59.276728553Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:13:59.276806671Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 19 14:26:47.139 E ns/openshift-machine-config-operator pod/machine-config-server-l7bmv node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/machine-config-server container exited with code 2 (Error): I0919 13:53:58.734355       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0919 13:53:58.736126       1 api.go:56] Launching server on :22624\nI0919 13:53:58.736556       1 api.go:56] Launching server on :22623\n
Sep 19 14:26:48.629 E ns/openshift-machine-config-operator pod/machine-config-controller-5fbdd4f9bf-wc9ws node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/machine-config-controller container exited with code 2 (Error): onfiguration.openshift.io/v1  } {MachineConfig  01-master-container-runtime  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-0d0a834f-dda2-454d-91df-52f92d0aa230-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0919 14:26:32.385567       1 render_controller.go:516] Pool master: now targeting: rendered-master-0d15859826ccec3a3cc655db942a486d\nI0919 14:26:37.025694       1 node_controller.go:759] Setting node ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf to desired config rendered-worker-ddb30a1c58609bd8ed758f64a8d54509\nI0919 14:26:37.057897       1 node_controller.go:453] Pool worker: node ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-ddb30a1c58609bd8ed758f64a8d54509\nI0919 14:26:37.385648       1 node_controller.go:759] Setting node ci-op-7bmybqjy-a21cb-jrng4-master-0 to desired config rendered-master-0d15859826ccec3a3cc655db942a486d\nI0919 14:26:37.415663       1 node_controller.go:453] Pool master: node ci-op-7bmybqjy-a21cb-jrng4-master-0 changed machineconfiguration.openshift.io/desiredConfig = rendered-master-0d15859826ccec3a3cc655db942a486d\nI0919 14:26:38.072475       1 node_controller.go:453] Pool worker: node ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf changed machineconfiguration.openshift.io/state = Working\nI0919 14:26:38.435732       1 node_controller.go:453] Pool master: node ci-op-7bmybqjy-a21cb-jrng4-master-0 changed machineconfiguration.openshift.io/state = Working\nI0919 14:26:38.882506       1 node_controller.go:434] Pool worker: node ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf is now reporting unready: node ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf is reporting Unschedulable\nI0919 14:26:41.229430       1 node_controller.go:434] Pool master: node ci-op-7bmybqjy-a21cb-jrng4-master-0 is now reporting unready: node ci-op-7bmybqjy-a21cb-jrng4-master-0 is reporting Unschedulable\n
Sep 19 14:27:13.081 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:27:09.430Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:27:09.445Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:27:09.446Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:27:09.451Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:27:09.451Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:28:19.596 E kube-apiserver Kube API started failing: Get https://api.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 19 14:28:29.359 E ns/openshift-marketplace pod/community-operators-6ffcd6f549-dd8f2 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/community-operators container exited with code 2 (Error): 
Sep 19 14:28:31.368 E ns/openshift-marketplace pod/redhat-marketplace-6878c9d988-bslgv node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/redhat-marketplace container exited with code 2 (Error): 
Sep 19 14:28:31.389 E ns/openshift-marketplace pod/certified-operators-68597d4644-mxmcd node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/certified-operators container exited with code 2 (Error): 
Sep 19 14:28:48.419 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Sep 19 14:29:06.934 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 14:29:15.277 E ns/openshift-sdn pod/sdn-s756k node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/sdn container exited with code 255 (Error): master-0 failed: pods "recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-0" not found\nI0919 14:27:56.061913  105657 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-0\nI0919 14:27:56.137369  105657 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-0\ninterrupt: Gracefully shutting down ...\nI0919 14:28:13.323375  105657 reflector.go:181] Stopping reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324172  105657 reflector.go:181] Stopping reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324318  105657 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324405  105657 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324482  105657 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324716  105657 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324849  105657 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:28:13.324942  105657 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:29:14.859353    2837 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0919 14:29:14.867542    2837 feature_gate.go:243] feature gates: &{map[]}\nI0919 14:29:14.867668    2837 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0919 14:29:14.867749    2837 cmd.go:216] Watching config file /config/..2020_09_19_14_16_02.118691053/kube-proxy-config.yaml for changes\nF0919 14:29:14.910094    2837 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 19 14:29:17.666 E ns/openshift-sdn pod/sdn-s756k node/ci-op-7bmybqjy-a21cb-jrng4-master-0 container/sdn container exited with code 255 (Error): I0919 14:29:16.343073    3627 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0919 14:29:16.346398    3627 feature_gate.go:243] feature gates: &{map[]}\nI0919 14:29:16.346513    3627 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0919 14:29:16.346572    3627 cmd.go:216] Watching config file /config/..2020_09_19_14_16_02.118691053/kube-proxy-config.yaml for changes\nF0919 14:29:16.367935    3627 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 19 14:29:34.740 E ns/openshift-kube-storage-version-migrator pod/migrator-6bb95f5866-d4tld node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/migrator container exited with code 2 (Error): 
Sep 19 14:29:35.738 E ns/openshift-monitoring pod/grafana-56945f44-nsr7l node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/grafana container exited with code 1 (Error): 
Sep 19 14:29:35.738 E ns/openshift-monitoring pod/grafana-56945f44-nsr7l node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/grafana-proxy container exited with code 2 (Error): 
Sep 19 14:29:35.823 E ns/openshift-monitoring pod/prometheus-adapter-777d8c4598-nb2fh node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/prometheus-adapter container exited with code 2 (Error): I0919 14:13:15.432718       1 adapter.go:94] successfully using in-cluster auth\nI0919 14:13:17.090331       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0919 14:13:17.090394       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0919 14:13:17.090920       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0919 14:13:17.091879       1 secure_serving.go:178] Serving securely on [::]:6443\nI0919 14:13:17.093067       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 19 14:29:35.860 E ns/openshift-marketplace pod/community-operators-9f544dbd8-kvspg node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/community-operators container exited with code 2 (Error): 
Sep 19 14:29:37.021 E ns/openshift-monitoring pod/thanos-querier-5f9688686d-xqqwm node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/oauth-proxy container exited with code 2 (Error): 2020/09/19 14:26:43 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 14:26:43 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:26:43 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:26:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 14:26:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:26:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 14:26:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:26:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0919 14:26:43.287957       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:26:43 http.go:107: HTTPS: listening on [::]:9091\n2020/09/19 14:27:13 oauthproxy.go:774: basicauth: 10.130.0.56:43134 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:28:23 oauthproxy.go:774: basicauth: 10.130.0.56:46414 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 14:29:37.044 E ns/openshift-marketplace pod/redhat-marketplace-58fb8ff48b-tz6hx node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/redhat-marketplace container exited with code 2 (Error): 
Sep 19 14:29:37.086 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/config-reloader container exited with code 2 (Error): 2020/09/19 14:13:16 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:29:37.086 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:13:16 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:16 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:13:16 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:13:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:13:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:13:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:13:16 http.go:107: HTTPS: listening on [::]:9095\nI0919 14:13:16.563578       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0919 14:28:30.179754       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/09/19 14:28:30 oauthproxy.go:782: requestauth: 10.131.0.21:55392 tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n
Sep 19 14:29:49.947 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:29:46.168Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:29:46.180Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:29:46.183Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:29:46.184Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:29:46.184Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:29:46.184Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:29:46.187Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:29:46.187Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:30:00.948 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 19 14:30:02.670 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-f9c98d476-nf8hk node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/operator container exited with code 1 (Error): https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0919 14:29:58.426423       1 httplog.go:90] verb="GET" URI="/metrics" latency=16.774223ms resp=200 UserAgent="Prometheus/2.15.2" srcIP="10.128.2.38:37494": \nI0919 14:30:01.409713       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 14:30:01.410171       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0919 14:30:01.410270       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0919 14:30:01.410575       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0919 14:30:01.416712       1 base_controller.go:101] Shutting down StatusSyncer_openshift-controller-manager ...\nI0919 14:30:01.416866       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0919 14:30:01.417433       1 base_controller.go:101] Shutting down ConfigObserver ...\nI0919 14:30:01.417542       1 base_controller.go:58] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0919 14:30:01.417606       1 base_controller.go:48] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0919 14:30:01.417682       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0919 14:30:01.416914       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nI0919 14:30:01.418635       1 base_controller.go:58] Shutting down worker of ConfigObserver controller ...\nI0919 14:30:01.418678       1 base_controller.go:48] All ConfigObserver workers have been terminated\nW0919 14:30:01.416931       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Sep 19 14:30:31.541 E ns/openshift-console pod/console-6f4b4d584b-fmtgp node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/console container exited with code 2 (Error): 2020-09-19T14:14:06Z cmd/main: cookies are secure!\n2020-09-19T14:14:06Z cmd/main: Binding to [::]:8443...\n2020-09-19T14:14:06Z cmd/main: using TLS\n2020-09-19T14:28:14Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp 172.30.0.1:443: connect: connection refused\n
Sep 19 14:32:38.457 E ns/openshift-sdn pod/sdn-7b9jp node/ci-op-7bmybqjy-a21cb-jrng4-master-1 container/sdn container exited with code 255 (Error): master-1 failed: pods "recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-1" not found\nI0919 14:31:33.179947  108509 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-1\nI0919 14:31:33.244552  108509 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-7bmybqjy-a21cb-jrng4-master-1\ninterrupt: Gracefully shutting down ...\nI0919 14:31:35.269998  108509 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270116  108509 reflector.go:181] Stopping reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270160  108509 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270201  108509 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270301  108509 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270353  108509 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270404  108509 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0919 14:31:35.270442  108509 reflector.go:181] Stopping reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0919 14:32:37.511650    2961 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0919 14:32:37.522266    2961 feature_gate.go:243] feature gates: &{map[]}\nI0919 14:32:37.522407    2961 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0919 14:32:37.522498    2961 cmd.go:216] Watching config file /config/..2020_09_19_14_16_44.435460454/kube-proxy-config.yaml for changes\nF0919 14:32:37.559317    2961 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 19 14:33:24.167 E ns/openshift-console-operator pod/console-operator-8495dd4cdb-nsdrj node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/console-operator container exited with code 1 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-console-operator_console-operator-8495dd4cdb-nsdrj_e9787a6f-1b0b-46bb-8e82-d4204c20ea94/console-operator/0.log": lstat /var/log/pods/openshift-console-operator_console-operator-8495dd4cdb-nsdrj_e9787a6f-1b0b-46bb-8e82-d4204c20ea94/console-operator/0.log: no such file or directory
Sep 19 14:33:27.710 E ns/openshift-machine-api pod/machine-api-operator-687fc96f7c-fgdk6 node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/machine-api-operator container exited with code 2 (Error): 
Sep 19 14:33:27.749 E ns/openshift-insights pod/insights-operator-7668c94c8f-nw945 node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/operator container exited with code 2 (Error):       1 status.go:304] The operator is marked as disabled\nI0919 14:31:00.342311       1 httplog.go:90] GET /metrics: (8.707282ms) 200 [Prometheus/2.15.2 10.128.2.38:60240]\nI0919 14:31:10.099230       1 httplog.go:90] GET /metrics: (2.407828ms) 200 [Prometheus/2.15.2 10.129.2.19:38738]\nI0919 14:31:30.342533       1 httplog.go:90] GET /metrics: (8.928707ms) 200 [Prometheus/2.15.2 10.128.2.38:60240]\nI0919 14:31:40.099405       1 httplog.go:90] GET /metrics: (2.67039ms) 200 [Prometheus/2.15.2 10.129.2.19:38738]\nI0919 14:31:51.431743       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0919 14:31:51.436497       1 configobserver.go:93] Found cloud.openshift.com token\nI0919 14:31:51.436536       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0919 14:32:00.341303       1 httplog.go:90] GET /metrics: (7.694018ms) 200 [Prometheus/2.15.2 10.128.2.38:60240]\nI0919 14:32:10.099760       1 httplog.go:90] GET /metrics: (3.059667ms) 200 [Prometheus/2.15.2 10.129.2.19:38738]\nI0919 14:32:13.406063       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0919 14:32:30.342678       1 httplog.go:90] GET /metrics: (9.061648ms) 200 [Prometheus/2.15.2 10.128.2.38:60240]\nI0919 14:32:31.403954       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0919 14:32:40.099641       1 httplog.go:90] GET /metrics: (2.806266ms) 200 [Prometheus/2.15.2 10.129.2.19:38738]\nI0919 14:32:51.437149       1 status.go:158] Number of last upload failures 1 lower than threshold 5. Not marking as degraded.\nI0919 14:32:51.437307       1 status.go:304] The operator is marked as disabled\nI0919 14:33:00.343412       1 httplog.go:90] GET /metrics: (9.879208ms) 200 [Prometheus/2.15.2 10.128.2.38:60240]\nI0919 14:33:10.098964       1 httplog.go:90] GET /metrics: (2.158439ms) 200 [Prometheus/2.15.2 10.129.2.19:38738]\n
Sep 19 14:35:19.977 E ns/openshift-marketplace pod/community-operators-9f544dbd8-6gvwt node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/community-operators container exited with code 2 (Error): 
Sep 19 14:35:20.005 E ns/openshift-marketplace pod/redhat-operators-7fd5d6b8c6-2hk69 node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/redhat-operators container exited with code 2 (Error): 
Sep 19 14:35:20.025 E ns/openshift-marketplace pod/certified-operators-864f9984bd-lf45m node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/certified-operators container exited with code 2 (Error): 
Sep 19 14:35:20.047 E ns/openshift-marketplace pod/redhat-marketplace-58fb8ff48b-4pvsq node/ci-op-7bmybqjy-a21cb-jrng4-worker-b-np6mf container/redhat-marketplace container exited with code 2 (Error): 
Sep 19 14:35:29.243 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-544d5cfb5d-gx7m4 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/snapshot-controller container exited with code 2 (Error): 
Sep 19 14:35:29.382 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/config-reloader container exited with code 2 (Error): 2020/09/19 14:13:30 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:35:29.382 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:13:30 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:30 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:13:30 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:13:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:13:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:13:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:13:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0919 14:13:30.463736       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:13:30 http.go:107: HTTPS: listening on [::]:9095\nE0919 14:28:30.181419       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/09/19 14:28:30 oauthproxy.go:782: requestauth: 10.131.0.21:53798 tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:alertmanager-main" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n
Sep 19 14:35:29.472 E ns/openshift-monitoring pod/prometheus-adapter-777d8c4598-5qh9c node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus-adapter container exited with code 2 (Error): ser "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0919 14:14:17.167193       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0919 14:17:21.307023       1 webhook.go:197] Failed to make webhook authorizer request: context canceled\nE0919 14:17:21.307137       1 errors.go:77] context canceled\nE0919 14:27:19.163910       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0919 14:27:19.164049       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0919 14:27:19.179022       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0919 14:27:19.179174       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nW0919 14:28:13.972101       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\n
Sep 19 14:35:29.522 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/config-reloader container exited with code 2 (Error): 2020/09/19 14:26:52 Watching directory: "/etc/alertmanager/config"\n
Sep 19 14:35:29.522 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/alertmanager-proxy container exited with code 2 (Error): 2020/09/19 14:26:53 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:26:53 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:26:53 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:26:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 14:26:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:26:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 14:26:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:26:53 http.go:107: HTTPS: listening on [::]:9095\nI0919 14:26:53.414002       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 19 14:35:30.403 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:27:09.430Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:27:09.445Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:27:09.446Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:27:09.447Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:27:09.447Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:27:09.451Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:27:09.451Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:35:30.403 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/rules-configmap-reloader container exited with code 2 (Error): 2020/09/19 14:27:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 19 14:35:30.403 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus-proxy container exited with code 2 (Error): 2020/09/19 14:27:11 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:27:11 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 14:27:11 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 14:27:11 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 14:27:11 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 14:27:11 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/19 14:27:11 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:27:11 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 14:27:11 http.go:107: HTTPS: listening on [::]:9091\nI0919 14:27:11.948106       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:31:20 oauthproxy.go:774: basicauth: 10.128.2.34:52430 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 14:35:30.403 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-19T14:27:10.558607644Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-09-19T14:27:10.561619163Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-19T14:27:15.861964766Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-09-19T14:27:15.862038793Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Sep 19 14:35:30.469 E ns/openshift-monitoring pod/telemeter-client-8f97d74dc-pp9dv node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/telemeter-client container exited with code 2 (Error): 
Sep 19 14:35:30.469 E ns/openshift-monitoring pod/telemeter-client-8f97d74dc-pp9dv node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/reload container exited with code 2 (Error): 
Sep 19 14:35:30.516 E ns/openshift-monitoring pod/thanos-querier-5f9688686d-zrj6b node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/oauth-proxy container exited with code 2 (Error): 9 14:13:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 14:13:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 14:13:24 http.go:107: HTTPS: listening on [::]:9091\nI0919 14:13:24.704464       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 14:14:13 oauthproxy.go:774: basicauth: 10.130.0.56:53464 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:17:13 oauthproxy.go:774: basicauth: 10.130.0.56:55986 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:18:16 oauthproxy.go:774: basicauth: 10.130.0.56:33992 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:21:13 oauthproxy.go:774: basicauth: 10.130.0.56:37960 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:23:13 oauthproxy.go:774: basicauth: 10.130.0.56:39628 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:26:13 oauthproxy.go:774: basicauth: 10.130.0.56:41928 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:29:14 oauthproxy.go:774: basicauth: 10.130.0.56:49586 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:31:17 oauthproxy.go:774: basicauth: 10.128.0.22:50042 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:32:27 oauthproxy.go:774: basicauth: 10.128.0.22:54774 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:34:22 oauthproxy.go:774: basicauth: 10.128.0.22:59044 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 14:35:22 oauthproxy.go:774: basicauth: 10.128.0.22:43818 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 14:35:38.901 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7bmybqjy-a21cb-jrng4-worker-c-txp2l container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T14:35:37.177Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T14:35:37.181Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T14:35:37.186Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T14:35:37.187Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T14:35:37.188Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T14:35:37.188Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T14:35:37.188Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T14:35:37.188Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T14:35:37.188Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T14:35:37.191Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T14:35:37.191Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 14:35:48.715 E ns/openshift-sdn pod/sdn-pnktj node/ci-op-7bmybqjy-a21cb-jrng4-master-2 container/sdn container exited with code 255 (Error): ry short watch: runtime/asm_amd64.s:1357: Unexpected watch close - watch lasted less than a second and no items received\nI0919 14:34:47.295819  111392 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nW0919 14:34:47.295928  111392 reflector.go:404] runtime/asm_amd64.s:1357: watch of *v1.HostSubnet ended with: very short watch: runtime/asm_amd64.s:1357: Unexpected watch close - watch lasted less than a second and no items received\nI0919 14:34:47.295951  111392 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:34:47.296016  111392 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nW0919 14:34:47.296138  111392 reflector.go:404] runtime/asm_amd64.s:1357: watch of *v1.Service ended with: very short watch: runtime/asm_amd64.s:1357: Unexpected watch close - watch lasted less than a second and no items received\nI0919 14:34:47.296360  111392 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0919 14:34:47.296180  111392 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0919 14:34:47.296285  111392 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0919 14:35:48.396756    2867 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0919 14:35:48.402858    2867 feature_gate.go:243] feature gates: &{map[]}\nI0919 14:35:48.402926    2867 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0919 14:35:48.403022    2867 cmd.go:216] Watching config file /config/..2020_09_19_14_17_03.724628846/kube-proxy-config.yaml for changes\nF0919 14:35:48.450752    2867 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-7bmybqjy-a21cb.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 19 14:35:56.552 E ns/e2e-k8s-sig-apps-job-upgrade-4261 pod/foo-m58qp node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/c container exited with code 137 (Error): 
Sep 19 14:35:56.584 E ns/e2e-k8s-sig-apps-job-upgrade-4261 pod/foo-mmmgx node/ci-op-7bmybqjy-a21cb-jrng4-worker-d-958rk container/c container exited with code 137 (Error): 
Sep 19 14:36:02.565 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver