Result: SUCCESS
Tests: 3 failed / 29 succeeded
Started: 2020-09-12 00:13
Elapsed: 1h22m
Work namespace: ci-op-78nxcwr1
Refs: release-4.5:be2b8d97, 442:f24fcc29
pod: c00282ad-f48c-11ea-b188-0a580a800cf0
repo: openshift/cluster-etcd-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 31m44s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 33s of 28m55s (2%):

Sep 12 00:59:29.707 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 00:59:29.711 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:00:05.707 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:00:05.709 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:00:12.759 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:00:12.762 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:11:40.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:11:40.709 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:12:26.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:12:27.706 - 18s   E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service is not responding to GET requests over new connections
Sep 12 01:12:46.709 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:14:26.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:14:27.706 - 1s    E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service is not responding to GET requests over new connections
Sep 12 01:14:29.716 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:15:25.707 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests on reused connections
Sep 12 01:15:25.715 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests on reused connections
Sep 12 01:16:19.811 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:16:19.812 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:16:32.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:16:32.710 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:16:36.770 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:16:36.775 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:16:47.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:16:47.710 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:19:33.706 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:19:33.710 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
Sep 12 01:19:49.707 E ns/e2e-k8s-service-lb-available-3976 svc/service-test Service stopped responding to GET requests over new connections
Sep 12 01:19:49.711 I ns/e2e-k8s-service-lb-available-3976 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1599873960.xml
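
The "33s of 28m55s (2%)" figure above is the accumulated outage time divided by the length of the monitoring window (33s / 1735s ≈ 1.9%, reported as 2%). A minimal Go sketch of that arithmetic, assuming the disrupted time is the sum of the intervals between "stopped responding" / "started responding" event pairs like those logged above; the interval values below are illustrative, not parsed from this run:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative outage intervals (not taken from the actual run);
	// the real figure sums the gaps between stop/start events.
	outages := []time.Duration{
		4 * time.Millisecond,
		20 * time.Second,
		3 * time.Second,
		10 * time.Second,
	}

	var disrupted time.Duration
	for _, o := range outages {
		disrupted += o
	}

	// Total observation window, taken from the summary line above.
	window := 28*time.Minute + 55*time.Second

	pct := float64(disrupted) / float64(window) * 100
	fmt.Printf("unreachable for %s of %s (%.0f%%)\n",
		disrupted.Round(time.Second), window, pct)
	// Output: unreachable for 33s of 28m55s (2%)
}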


Cluster upgrade Cluster frontend ingress remain available 30m44s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 1m54s of 30m43s (6%):

Sep 12 00:56:16.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 00:56:17.411 - 10s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 00:56:17.412 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 00:56:18.411 - 6s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 12 00:56:18.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 00:56:18.464 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 00:56:24.453 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 00:56:27.602 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:10:59.411 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 01:10:59.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:10:59.481 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:11:00.411 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 12 01:11:00.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 01:11:00.446 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 01:11:09.425 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 01:11:10.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:11:10.477 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:11:15.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 01:11:15.447 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 01:11:20.411 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 01:11:21.411 - 6s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 12 01:11:22.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:11:23.411 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 01:11:23.541 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:11:27.451 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 01:13:34.411 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 01:13:34.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:13:34.457 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 01:13:35.411 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 01:13:40.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 01:13:40.475 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 01:13:44.448 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:13:45.411 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 01:13:46.411 - 22s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 12 01:13:55.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:13:55.447 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:14:00.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 01:14:00.447 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 01:14:06.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:14:07.411 - 2s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 01:14:08.441 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 01:14:09.446 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:18:00.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:18:01.411 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 01:18:05.411 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 12 01:18:06.411 - 8s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 12 01:18:10.411 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 12 01:18:10.435 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 12 01:18:11.411 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 12 01:18:15.437 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 12 01:18:20.437 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 12 01:18:21.411 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 12 01:18:22.411 - 1s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 12 01:18:24.452 I ns/openshift-console route/console Route started responding to GET requests over new connections
				from junit_upgrade_1599873960.xml


openshift-tests Monitor cluster while tests execute 36m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
84 error level events were detected during this test run:

Sep 12 00:51:52.648 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 586)
Sep 12 00:53:38.064 E kube-apiserver Kube API started failing: Get https://api.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 12 00:55:19.340 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-78bc67ffc6-86zc4 node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/kube-storage-version-migrator-operator container exited with code 1 (Error): ditions":[{"lastTransitionTime":"2020-09-12T00:22:23Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-12T00:53:40Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-12T00:33:14Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-12T00:22:23Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0912 00:53:40.041866       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"d8f30a35-012c-4f6e-a125-8fc2d440e747", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "TargetDegraded: \"kube-storage-version-migrator/roles.yaml\" (string): etcdserver: leader changed\nTargetDegraded: " to "",Progressing changed from True to False ("")\nI0912 00:55:18.229813       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0912 00:55:18.230229       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0912 00:55:18.230289       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nI0912 00:55:18.230335       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0912 00:55:18.230385       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0912 00:55:18.230416       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0912 00:55:18.230436       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0912 00:55:18.230452       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0912 00:55:18.230520       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Sep 12 00:55:31.398 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-78nxcwr1-eabca-cqthj-master-0 node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/cluster-policy-controller container exited with code 255 (Error):  1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ImageStream: Get https://localhost:6443/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=28198&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:55:27.007251       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=25926&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:55:27.009653       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=25914&timeout=8m8s&timeoutSeconds=488&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:55:27.011057       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Route: Get https://localhost:6443/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=28196&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:55:27.012009       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25915&timeout=9m6s&timeoutSeconds=546&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:55:27.013678       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.DeploymentConfig: Get https://localhost:6443/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=28196&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0912 00:55:31.014787       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0912 00:55:31.014956       1 policy_controller.go:94] leaderelection lost\n
Sep 12 00:55:39.297 E ns/openshift-kube-storage-version-migrator pod/migrator-6cd8c9d666-lbzh7 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/migrator container exited with code 2 (Error): I0912 00:45:22.406151       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0912 00:51:34.061263       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 12 00:55:41.390 E ns/openshift-cluster-machine-approver pod/machine-approver-5b659d698c-6znkx node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/machine-approver-controller container exited with code 2 (Error): ieve current serving cert: remote error: tls: internal error\nI0912 00:31:47.850132       1 csr_check.go:183] Falling back to machine-api authorization for ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl\nI0912 00:31:47.885153       1 main.go:197] CSR csr-99sr4 approved\nE0912 00:49:42.555841       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=20534&timeoutSeconds=361&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:49:42.555962       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=21056&timeoutSeconds=525&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:49:43.564257       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=20534&timeoutSeconds=559&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:49:43.564862       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=21056&timeoutSeconds=509&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:49:49.956022       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: the server is currently unable to handle the request (get clusteroperators.config.openshift.io)\n
Sep 12 00:55:48.599 E ns/openshift-insights pod/insights-operator-9d8b748fd-rsxsh node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/operator container exited with code 2 (Error): cret\nI0912 00:53:51.669638       1 configobserver.go:231] Configuration updated: enabled=true endpoint=https://cloud.redhat.com/api/ingress/v1/upload interval=10m0s username=false token=true\nI0912 00:53:51.669829       1 periodic.go:125] Gathering cluster info every 10m0s\nI0912 00:53:51.669961       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:53:59.161527       1 httplog.go:90] GET /metrics: (7.167323ms) 200 [Prometheus/2.15.2 10.129.2.9:33668]\nI0912 00:54:06.670830       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:54:08.883719       1 httplog.go:90] GET /metrics: (2.445533ms) 200 [Prometheus/2.15.2 10.128.2.9:39298]\nI0912 00:54:21.671244       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:54:29.183054       1 httplog.go:90] GET /metrics: (27.907086ms) 200 [Prometheus/2.15.2 10.129.2.9:33668]\nI0912 00:54:36.671651       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:54:38.883984       1 httplog.go:90] GET /metrics: (2.561432ms) 200 [Prometheus/2.15.2 10.128.2.9:39298]\nI0912 00:54:51.644354       1 status.go:314] The operator is healthy\nI0912 00:54:51.672012       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:54:59.163763       1 httplog.go:90] GET /metrics: (9.210896ms) 200 [Prometheus/2.15.2 10.129.2.9:33668]\nI0912 00:55:06.672341       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:55:08.882972       1 httplog.go:90] GET /metrics: (1.813316ms) 200 [Prometheus/2.15.2 10.128.2.9:39298]\nI0912 00:55:21.672658       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:55:29.166857       1 httplog.go:90] GET /metrics: (12.448862ms) 200 [Prometheus/2.15.2 10.129.2.9:33668]\nI0912 00:55:36.672997       1 insightsuploader.go:122] Nothing to report since 2020-09-12T00:29:06Z\nI0912 00:55:38.883575       1 httplog.go:90] GET /metrics: (2.464093ms) 200 [Prometheus/2.15.2 10.128.2.9:39298]\n
Sep 12 00:55:51.341 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-778b66d566-hmfmf node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/operator container exited with code 255 (Error): 0912 00:55:43.503260       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:43.503249497 +0000 UTC m=+1349.055304942\nI0912 00:55:43.546326       1 operator.go:148] Finished syncing operator at 43.066816ms\nI0912 00:55:43.546498       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:43.54649139 +0000 UTC m=+1349.098546836\nI0912 00:55:43.591700       1 operator.go:148] Finished syncing operator at 45.200873ms\nI0912 00:55:43.591754       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:43.591750041 +0000 UTC m=+1349.143805477\nI0912 00:55:43.651335       1 operator.go:148] Finished syncing operator at 59.577975ms\nI0912 00:55:43.651414       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:43.651409775 +0000 UTC m=+1349.203465232\nI0912 00:55:43.918073       1 operator.go:148] Finished syncing operator at 266.655098ms\nI0912 00:55:44.484722       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:44.484713353 +0000 UTC m=+1350.036768789\nI0912 00:55:44.563750       1 operator.go:148] Finished syncing operator at 79.027765ms\nI0912 00:55:44.569651       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:44.569640787 +0000 UTC m=+1350.121696233\nI0912 00:55:45.123850       1 operator.go:148] Finished syncing operator at 554.202041ms\nI0912 00:55:50.734774       1 operator.go:146] Starting syncing operator at 2020-09-12 00:55:50.734764628 +0000 UTC m=+1356.286820076\nI0912 00:55:50.769102       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0912 00:55:50.769508       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0912 00:55:50.769683       1 logging_controller.go:93] Shutting down LogLevelController\nI0912 00:55:50.769708       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0912 00:55:50.769720       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0912 00:55:50.769786       1 builder.go:243] stopped\n
Sep 12 00:56:14.909 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/config-reloader container exited with code 2 (Error): 2020/09/12 00:33:29 Watching directory: "/etc/alertmanager/config"\n
Sep 12 00:56:15.524 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/config-reloader container exited with code 2 (Error): 2020/09/12 00:56:09 Watching directory: "/etc/alertmanager/config"\n
Sep 12 00:56:15.524 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/alertmanager-proxy container exited with code 2 (Error): 2020/09/12 00:56:10 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 00:56:10 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/12 00:56:10 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/12 00:56:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/12 00:56:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/12 00:56:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 00:56:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0912 00:56:10.565432       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/12 00:56:10 http.go:107: HTTPS: listening on [::]:9095\n
Sep 12 00:56:29.903 E ns/openshift-monitoring pod/node-exporter-q579d node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/node-exporter container exited with code 143 (Error): -12T00:32:42Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-12T00:32:42Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 12 00:56:30.941 E ns/openshift-monitoring pod/grafana-f4dbfc96-tldqj node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/grafana container exited with code 1 (Error): 
Sep 12 00:56:30.941 E ns/openshift-monitoring pod/grafana-f4dbfc96-tldqj node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/grafana-proxy container exited with code 2 (Error): 
Sep 12 00:56:33.914 E ns/openshift-monitoring pod/thanos-querier-585c975d5c-g9t5l node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/oauth-proxy container exited with code 2 (Error): "^/metrics"\n2020/09/12 00:39:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/12 00:39:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/12 00:39:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/12 00:39:01 http.go:107: HTTPS: listening on [::]:9091\nI0912 00:39:01.516841       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/12 00:40:36 oauthproxy.go:774: basicauth: 10.128.0.5:51916 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:42:36 oauthproxy.go:774: basicauth: 10.128.0.5:53368 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:43:36 oauthproxy.go:774: basicauth: 10.128.0.5:54056 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:47:36 oauthproxy.go:774: basicauth: 10.128.0.5:36386 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:47:36 oauthproxy.go:774: basicauth: 10.128.0.5:36386 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:48:40 oauthproxy.go:774: basicauth: 10.128.0.5:37174 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:48:40 oauthproxy.go:774: basicauth: 10.128.0.5:37174 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:50:36 oauthproxy.go:774: basicauth: 10.128.0.5:40230 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:52:29 oauthproxy.go:774: basicauth: 10.129.0.88:41454 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:56:29 oauthproxy.go:774: basicauth: 10.129.0.88:33722 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 12 00:56:40.588 E ns/openshift-monitoring pod/prometheus-adapter-9dc44c8ff-68gz7 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/prometheus-adapter container exited with code 2 (Error): I0912 00:39:01.256280       1 adapter.go:94] successfully using in-cluster auth\nI0912 00:39:02.275019       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0912 00:39:02.275074       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0912 00:39:02.275263       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0912 00:39:02.276113       1 secure_serving.go:178] Serving securely on [::]:6443\nI0912 00:39:02.276149       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Sep 12 00:56:49.034 E ns/openshift-console-operator pod/console-operator-69f6b9bb6b-xqcwl node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/console-operator container exited with code 1 (Error): ; INTERNAL_ERROR") has prevented the request from succeeding\nW0912 00:54:18.407951       1 reflector.go:404] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 85; INTERNAL_ERROR") has prevented the request from succeeding\nI0912 00:56:48.244359       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0912 00:56:48.248391       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0912 00:56:48.248507       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0912 00:56:48.248548       1 base_controller.go:101] Shutting down ManagementStateController ...\nI0912 00:56:48.248577       1 base_controller.go:101] Shutting down ResourceSyncController ...\nI0912 00:56:48.248577       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0912 00:56:48.248588       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0912 00:56:48.248627       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0912 00:56:48.248632       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0912 00:56:48.248640       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0912 00:56:48.248649       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0912 00:56:48.248669       1 reflector.go:181] Stopping reflector *v1.OAuthClient (10m0s) from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101\nW0912 00:56:48.248797       1 builder.go:96] graceful termination failed, controllers failed with error: stopped\n
Sep 12 00:56:50.052 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-12T00:56:46.689Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-12T00:56:46.693Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-12T00:56:46.694Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-12T00:56:46.694Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-12T00:56:46.695Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-12T00:56:46.695Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-12T00:56:46.695Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-12T00:56:46.697Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-12T00:56:46.697Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-12
Sep 12 00:57:09.127 E ns/openshift-monitoring pod/node-exporter-xq8zv node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/node-exporter container exited with code 143 (Error): -12T00:29:06Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-12T00:29:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 12 00:57:10.718 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-12T00:57:08.202Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-12T00:57:08.207Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-12T00:57:08.207Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-12T00:57:08.208Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-12T00:57:08.209Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-12T00:57:08.209Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-12T00:57:08.209Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-12T00:57:08.209Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-12T00:57:08.211Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-12T00:57:08.211Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-12
Sep 12 00:57:39.218 E ns/openshift-console pod/console-778d6c7d7-nghxl node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/console container exited with code 2 (Error): 12T00:44:12Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:22Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:32Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:42Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:52Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:02Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:12Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:22Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:32Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:42Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:52Z cmd/main: Binding to [::]:8443...\n2020-09-12T00:45:52Z cmd/main: using TLS\n
Sep 12 00:57:41.339 E ns/openshift-marketplace pod/community-operators-9cd4866f-z8sw9 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/community-operators container exited with code 2 (Error): 
Sep 12 00:57:48.203 E ns/openshift-console pod/console-778d6c7d7-g8nfx node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/console container exited with code 2 (Error): 2020-09-12T00:44:13Z cmd/main: cookies are secure!\n2020-09-12T00:44:13Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:23Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:33Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:43Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:44:53Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:03Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:13Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:23Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:33Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-09-12T00:45:43Z cmd/main: Binding to [::]:8443...\n2020-09-12T00:45:43Z cmd/main: using TLS\n
Sep 12 00:59:13.690 E ns/openshift-sdn pod/sdn-controller-dltw4 node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/sdn-controller container exited with code 2 (Error): I0912 00:21:50.759537       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0912 00:41:35.574325       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.0.5:47882->10.0.0.2:6443: read: connection timed out\n
Sep 12 00:59:20.663 E ns/openshift-sdn pod/sdn-controller-4j7lh node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/sdn-controller container exited with code 2 (Error):    1 subnets.go:212] Cleared node NetworkUnavailable/NoRouteCreated condition for ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s\nI0912 00:31:42.265412       1 subnets.go:150] Created HostSubnet ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s (host: "ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s", ip: "10.0.32.2", subnet: "10.128.2.0/23")\nI0912 00:31:47.687279       1 subnets.go:212] Cleared node NetworkUnavailable/NoRouteCreated condition for ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl\nI0912 00:31:47.734570       1 subnets.go:150] Created HostSubnet ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl (host: "ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl", ip: "10.0.32.3", subnet: "10.129.2.0/23")\nI0912 00:49:56.652269       1 vnids.go:116] Allocated netid 158016 for namespace "e2e-k8s-sig-apps-job-upgrade-1544"\nI0912 00:49:56.691104       1 vnids.go:116] Allocated netid 1653135 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-9275"\nI0912 00:49:56.706211       1 vnids.go:116] Allocated netid 7574755 for namespace "e2e-k8s-sig-apps-deployment-upgrade-6077"\nI0912 00:49:56.748274       1 vnids.go:116] Allocated netid 1731684 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-8659"\nI0912 00:49:56.763954       1 vnids.go:116] Allocated netid 16334121 for namespace "e2e-k8s-service-lb-available-3976"\nI0912 00:49:56.811156       1 vnids.go:116] Allocated netid 5706233 for namespace "e2e-kubernetes-api-available-2698"\nI0912 00:49:56.846418       1 vnids.go:116] Allocated netid 1292316 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8095"\nI0912 00:49:56.891665       1 vnids.go:116] Allocated netid 2734746 for namespace "e2e-openshift-api-available-3364"\nI0912 00:49:56.907459       1 vnids.go:116] Allocated netid 5578085 for namespace "e2e-check-for-critical-alerts-7204"\nI0912 00:49:56.933294       1 vnids.go:116] Allocated netid 12935443 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-6035"\nI0912 00:49:56.958215       1 vnids.go:116] Allocated netid 14446935 for namespace "e2e-frontend-ingress-available-5568"\n
Sep 12 00:59:23.683 E ns/openshift-sdn pod/sdn-6lkch node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/sdn container exited with code 255 (Error): 2 got IP 10.130.0.75, ofport 76\nI0912 00:58:09.744584    2758 pod.go:541] CNI_DEL openshift-kube-apiserver/revision-pruner-7-ci-op-78nxcwr1-eabca-cqthj-master-2\nI0912 00:58:37.228162    2758 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.0.5:6443 10.0.0.6:6443]\nI0912 00:58:37.228203    2758 roundrobin.go:217] Delete endpoint 10.0.0.3:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0912 00:58:37.345731    2758 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.0.5:6443 10.0.0.6:6443]\nI0912 00:58:37.345928    2758 roundrobin.go:217] Delete endpoint 10.0.0.3:6443 for service "default/kubernetes:https"\nI0912 00:58:37.429083    2758 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:58:37.429864    2758 proxier.go:349] userspace syncProxyRules took 54.83694ms\nI0912 00:58:37.590886    2758 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:58:37.591554    2758 proxier.go:349] userspace syncProxyRules took 41.159476ms\nI0912 00:59:10.851167    2758 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.11:6443 10.129.0.7:6443]\nI0912 00:59:10.851357    2758 roundrobin.go:217] Delete endpoint 10.130.0.5:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0912 00:59:10.851446    2758 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.11:8443 10.129.0.7:8443]\nI0912 00:59:10.851492    2758 roundrobin.go:217] Delete endpoint 10.130.0.5:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0912 00:59:11.055588    2758 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:59:11.056366    2758 proxier.go:349] userspace syncProxyRules took 51.369228ms\nF0912 00:59:23.426944    2758 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 12 00:59:29.808 E ns/openshift-sdn pod/sdn-controller-cdzpd node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/sdn-controller container exited with code 2 (Error): I0912 00:21:51.092638       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 12 00:59:41.794 E ns/openshift-multus pod/multus-admission-controller-knbsf node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/multus-admission-controller container exited with code 137 (Error): 
Sep 12 00:59:41.836 E ns/openshift-multus pod/multus-shrbn node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/kube-multus container exited with code 137 (Error): 
Sep 12 00:59:56.773 E ns/openshift-sdn pod/sdn-c9f29 node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/sdn container exited with code 255 (Error): 64    2460 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:57:51.520517    2460 proxier.go:349] userspace syncProxyRules took 37.937053ms\nI0912 00:58:37.228460    2460 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.0.5:6443 10.0.0.6:6443]\nI0912 00:58:37.228507    2460 roundrobin.go:217] Delete endpoint 10.0.0.3:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0912 00:58:37.347286    2460 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.0.5:6443 10.0.0.6:6443]\nI0912 00:58:37.347328    2460 roundrobin.go:217] Delete endpoint 10.0.0.3:6443 for service "default/kubernetes:https"\nI0912 00:58:37.374876    2460 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:58:37.375378    2460 proxier.go:349] userspace syncProxyRules took 34.776997ms\nI0912 00:58:37.527930    2460 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:58:37.528442    2460 proxier.go:349] userspace syncProxyRules took 35.226435ms\nI0912 00:59:10.861060    2460 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.11:6443 10.129.0.7:6443]\nI0912 00:59:10.861097    2460 roundrobin.go:217] Delete endpoint 10.130.0.5:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0912 00:59:10.861114    2460 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.11:8443 10.129.0.7:8443]\nI0912 00:59:10.861123    2460 roundrobin.go:217] Delete endpoint 10.130.0.5:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0912 00:59:11.022462    2460 proxier.go:370] userspace proxy: processing 0 service events\nI0912 00:59:11.023005    2460 proxier.go:349] userspace syncProxyRules took 33.898667ms\nF0912 00:59:56.372749    2460 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 12 00:59:58.956 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ci-op-78nxcwr1-eabca-cqthj-master-1 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/kube-controller-manager container exited with code 255 (Error): meout=9m0s&timeoutSeconds=540&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:57.585665       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=24617&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:57.586655       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=24617&timeout=9m4s&timeoutSeconds=544&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:57.587705       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/kubeletconfigs?allowWatchBookmarks=true&resourceVersion=28415&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:57.589229       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/dnses?allowWatchBookmarks=true&resourceVersion=28413&timeout=9m7s&timeoutSeconds=547&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:57.590173       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31191&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0912 00:59:58.067780       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0912 00:59:58.067908       1 controllermanager.go:291] leaderelection lost\n
Sep 12 00:59:59.950 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ci-op-78nxcwr1-eabca-cqthj-master-1 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/kube-scheduler container exited with code 255 (Error): watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=24618&timeout=9m33s&timeoutSeconds=573&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:58.383338       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=24612&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:58.385026       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=24612&timeout=5m5s&timeoutSeconds=305&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:58.386168       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=33956&timeout=7m29s&timeoutSeconds=449&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:58.387180       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=31191&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0912 00:59:58.388776       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=31555&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0912 00:59:58.880418       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0912 00:59:58.880457       1 server.go:244] leaderelection lost\n
Sep 12 01:00:24.035 E ns/openshift-kube-apiserver pod/kube-apiserver-ci-op-78nxcwr1-eabca-cqthj-master-1 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/setup init container exited with code 124 (Error): ................................................................................
Sep 12 01:00:50.912 E ns/openshift-sdn pod/sdn-c9f29 node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/sdn container exited with code 137 (Error): tarted Kubernetes Proxy on 0.0.0.0\nI0912 01:00:22.481957   71999 cmd.go:170] openshift-sdn network plugin waiting for proxy startup to complete\nI0912 01:00:22.482046   71999 reflector.go:175] Starting reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482059   71999 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482162   71999 reflector.go:175] Starting reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482326   71999 reflector.go:181] Stopping reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482332   71999 reflector.go:175] Starting reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482349   71999 reflector.go:175] Starting reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482359   71999 reflector.go:181] Stopping reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482364   71999 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482399   71999 reflector.go:175] Starting reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482402   71999 reflector.go:175] Starting reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482406   71999 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482410   71999 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482451   71999 reflector.go:175] Starting reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482463   71999 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482520   71999 reflector.go:175] Starting reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0912 01:00:22.482529   71999 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\n
Sep 12 01:00:51.085 E ns/openshift-multus pod/multus-84dxx node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/kube-multus container exited with code 137 (Error): 
Sep 12 01:01:32.034 E ns/openshift-multus pod/multus-tvgrg node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/kube-multus container exited with code 137 (Error): 
Sep 12 01:01:48.355 E ns/openshift-sdn pod/sdn-25pbr node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/sdn container exited with code 255 (Error): 1:07.292168  116992 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.66:8443 10.130.0.76:8443]\nI0912 01:01:07.292215  116992 roundrobin.go:217] Delete endpoint 10.129.0.7:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0912 01:01:07.485665  116992 proxier.go:370] userspace proxy: processing 0 service events\nI0912 01:01:07.486775  116992 proxier.go:349] userspace syncProxyRules took 52.090533ms\nI0912 01:01:07.655455  116992 proxier.go:370] userspace proxy: processing 0 service events\nI0912 01:01:07.656099  116992 proxier.go:349] userspace syncProxyRules took 35.030026ms\nI0912 01:01:37.582166  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:38.090961  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:38.721852  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:39.509320  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:40.491772  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:41.718622  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:43.250675  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:45.164142  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0912 01:01:47.418732  116992 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
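The sdn exit above has the shape of a poll-until-timeout healthcheck: repeated probes of the OVS management socket fail with "connection refused" until a deadline passes, then the container exits fatally ("timed out waiting for the condition") so kubelet restarts it. A hedged sketch of that pattern using apimachinery's wait helpers follows; checkOVS, the socket path, and the intervals are illustrative assumptions, not the SDN's actual implementation.

// Sketch of a poll-until-timeout healthcheck. checkOVS is a hypothetical
// probe of the local OVS management socket; wait.PollImmediate returns
// "timed out waiting for the condition" if it never succeeds in time.
package main

import (
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog/v2"
)

func checkOVS() (bool, error) {
	conn, err := net.DialTimeout("unix", "/var/run/openvswitch/br0.mgmt", time.Second)
	if err != nil {
		klog.Infof("Error reaching OVS: %v", err) // logged, but keep polling
		return false, nil
	}
	conn.Close()
	return true, nil
}

func main() {
	// Probe every 500ms, give up after 10s, mirroring the run of
	// "connection refused" lines followed by a single fatal exit.
	if err := wait.PollImmediate(500*time.Millisecond, 10*time.Second, checkOVS); err != nil {
		klog.Fatalf("SDN healthcheck detected OVS server change, restarting: %v", err)
	}
}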
Sep 12 01:02:22.125 E ns/openshift-multus pod/multus-bwcv8 node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/kube-multus container exited with code 137 (Error): 
Sep 12 01:03:06.837 E ns/openshift-multus pod/multus-x8vgd node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/kube-multus container exited with code 137 (Error): 
Sep 12 01:03:50.894 E ns/openshift-multus pod/multus-l7qfb node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/kube-multus container exited with code 137 (Error): 
Sep 12 01:05:37.130 E ns/openshift-machine-config-operator pod/machine-config-operator-86d7f8d5cb-mhlcf node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/machine-config-operator container exited with code 2 (Error): fig", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"469b938c-ca7d-4767-bca4-8a87b91662da", ResourceVersion:"12029", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735466943, loc:(*time.Location)(0x25205a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-86d7f8d5cb-mhlcf_30cb9b77-4b0d-4235-84da-2374994ff9d7\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-12T00:28:52Z\",\"renewTime\":\"2020-09-12T00:28:52Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000232c60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000232ca0)}}}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-86d7f8d5cb-mhlcf_30cb9b77-4b0d-4235-84da-2374994ff9d7 became leader'\nI0912 00:28:52.655550       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0912 00:28:53.300560       1 operator.go:265] Starting MachineConfigOperator\nI0912 00:30:22.119169       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"e3ba7598-eb3a-49b2-9ab5-ab99532c5400", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 4.5.0-0.ci.test-2020-09-12-001354-ci-op-78nxcwr1}]\n
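The "no kind is registered for the type v1.ConfigMap in scheme ..." message above is the event recorder failing to build an object reference because core/v1 types were not added to the scheme the recorder was constructed with, so the leader-election event against the machine-config ConfigMap cannot be reported. A hedged sketch of the registration that avoids this follows; the component name and object names are taken from the log for illustration only.

// Sketch: an event recorder can only reference objects whose types are
// registered in its scheme, so core/v1 must be added before recording
// events against a ConfigMap. Names below are illustrative.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/record"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	scheme := runtime.NewScheme()
	if err := corev1.AddToScheme(scheme); err != nil { // without this, ConfigMap references fail
		klog.Fatal(err)
	}

	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: client.CoreV1().Events("")})
	recorder := broadcaster.NewRecorder(scheme, corev1.EventSource{Component: "machine-config-operator"})

	cm := &corev1.ConfigMap{}
	cm.Namespace, cm.Name = "openshift-machine-config-operator", "machine-config"
	recorder.Eventf(cm, corev1.EventTypeNormal, "LeaderElection", "became leader")
}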
Sep 12 01:07:40.898 E ns/openshift-machine-config-operator pod/machine-config-daemon-tdzbw node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/oauth-proxy container exited with code 143 (Error): 
Sep 12 01:07:53.058 E ns/openshift-machine-config-operator pod/machine-config-daemon-rp9ct node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/oauth-proxy container exited with code 143 (Error): 
Sep 12 01:07:57.530 E ns/openshift-machine-config-operator pod/machine-config-daemon-xpfbf node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/oauth-proxy container exited with code 143 (Error): 
Sep 12 01:08:07.725 E ns/openshift-machine-config-operator pod/machine-config-daemon-bg8pz node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/oauth-proxy container exited with code 143 (Error): 
Sep 12 01:08:40.222 E ns/openshift-machine-config-operator pod/machine-config-controller-765f78cdbf-scqks node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/machine-config-controller container exited with code 2 (Error): 8188c18d62530a59dc3c4af1\nI0912 00:32:38.158457       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-819a27e28188c18d62530a59dc3c4af1\nI0912 00:32:38.158466       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s changed machineconfiguration.openshift.io/state = Done\nI0912 00:32:38.936805       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb changed machineconfiguration.openshift.io/currentConfig = rendered-worker-819a27e28188c18d62530a59dc3c4af1\nI0912 00:32:38.936835       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-819a27e28188c18d62530a59dc3c4af1\nI0912 00:32:38.936856       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb changed machineconfiguration.openshift.io/state = Done\nI0912 00:32:46.132921       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl changed machineconfiguration.openshift.io/currentConfig = rendered-worker-819a27e28188c18d62530a59dc3c4af1\nI0912 00:32:46.132945       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-819a27e28188c18d62530a59dc3c4af1\nI0912 00:32:46.132957       1 node_controller.go:453] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl changed machineconfiguration.openshift.io/state = Done\nI0912 00:32:56.595465       1 node_controller.go:436] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb is now reporting ready\nI0912 00:33:02.598809       1 node_controller.go:436] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s is now reporting ready\nI0912 00:33:08.055074       1 node_controller.go:436] Pool worker: node ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl is now reporting ready\n
Sep 12 01:10:38.286 E ns/openshift-machine-config-operator pod/machine-config-server-tbdq4 node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/machine-config-server container exited with code 2 (Error): I0912 00:29:50.070370       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0912 00:29:50.071576       1 api.go:56] Launching server on :22624\nI0912 00:29:50.071634       1 api.go:56] Launching server on :22623\nI0912 00:29:53.984272       1 api.go:102] Pool worker requested by 10.0.32.2:53904\nE0912 00:29:54.000921       1 api.go:108] couldn't get config for req: {worker}, error: could not fetch config , err: resource name may not be empty\nI0912 00:29:59.002185       1 api.go:102] Pool worker requested by 10.0.32.2:53904\n
Sep 12 01:10:50.820 E ns/openshift-machine-config-operator pod/machine-config-server-k9h7b node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/machine-config-server container exited with code 2 (Error): I0912 00:29:50.008949       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0912 00:29:50.009742       1 api.go:56] Launching server on :22624\nI0912 00:29:50.009830       1 api.go:56] Launching server on :22623\nI0912 00:29:52.399562       1 api.go:102] Pool worker requested by 10.0.32.4:57298\nE0912 00:29:52.413347       1 api.go:108] couldn't get config for req: {worker}, error: could not fetch config , err: resource name may not be empty\nI0912 00:29:57.414844       1 api.go:102] Pool worker requested by 10.0.32.4:57298\nE0912 00:29:57.420791       1 api.go:108] couldn't get config for req: {worker}, error: could not fetch config , err: resource name may not be empty\nI0912 00:30:02.422488       1 api.go:102] Pool worker requested by 10.0.32.4:57298\n
Sep 12 01:10:59.694 E ns/openshift-machine-api pod/machine-api-controllers-6c656b5798-dhhhx node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/machineset-controller container exited with code 1 (Error): 
Sep 12 01:11:05.900 E ns/openshift-machine-config-operator pod/machine-config-server-4rjrq node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/machine-config-server container exited with code 2 (Error): I0912 00:29:50.155081       1 start.go:38] Version: machine-config-daemon-4.5.0-202006231303.p0-40-g08aad192-dirty (08aad1925d6e29266390ecb6f4e6730d60e44aaf)\nI0912 00:29:50.156604       1 api.go:56] Launching server on :22624\nI0912 00:29:50.156731       1 api.go:56] Launching server on :22623\nI0912 00:29:52.433942       1 api.go:102] Pool worker requested by 10.0.32.3:35892\nE0912 00:29:52.447233       1 api.go:108] couldn't get config for req: {worker}, error: could not fetch config , err: resource name may not be empty\nI0912 00:29:57.448517       1 api.go:102] Pool worker requested by 10.0.32.3:35892\nE0912 00:29:57.452996       1 api.go:108] couldn't get config for req: {worker}, error: could not fetch config , err: resource name may not be empty\nI0912 00:30:02.454313       1 api.go:102] Pool worker requested by 10.0.32.3:35892\n
Sep 12 01:11:18.832 E ns/openshift-console pod/console-fd5754d75-jx2sq node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/console container exited with code 2 (Error): 2020-09-12T00:57:11Z cmd/main: cookies are secure!\n2020-09-12T00:57:11Z cmd/main: Binding to [::]:8443...\n2020-09-12T00:57:11Z cmd/main: using TLS\n
Sep 12 01:11:19.526 E ns/e2e-k8s-sig-apps-job-upgrade-1544 pod/foo-fl2jf node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/c container exited with code 137 (Error): 
Sep 12 01:12:26.147 E ns/openshift-sdn pod/sdn-25pbr node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/sdn container exited with code 255 (Error): s-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:38.090961  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:38.721852  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:39.509320  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:40.491772  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:41.718622  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:43.250675  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nI0912 01:01:45.164142  116992 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0912 01:01:47.418732  116992 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\nI0912 01:12:24.625831    2475 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0912 01:12:24.629840    2475 feature_gate.go:243] feature gates: &{map[]}\nI0912 01:12:24.629896    2475 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0912 01:12:24.629947    2475 cmd.go:216] Watching config file /config/..2020_09_12_00_59_27.300590351/kube-proxy-config.yaml for changes\nF0912 01:12:24.673315    2475 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 12 01:12:49.199 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 12 01:12:59.698 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Sep 12 01:13:08.934 E ns/openshift-machine-config-operator pod/machine-config-operator-5c48c96df8-dpz46 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/machine-config-operator container exited with code 2 (Error): , SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"469b938c-ca7d-4767-bca4-8a87b91662da", ResourceVersion:"37319", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63735466943, loc:(*time.Location)(0x25205a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-5c48c96df8-dpz46_95ead99f-edac-4a24-bde7-7c015f0e8edb\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-12T01:07:33Z\",\"renewTime\":\"2020-09-12T01:07:33Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0003da340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0003da360)}}}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-5c48c96df8-dpz46_95ead99f-edac-4a24-bde7-7c015f0e8edb became leader'\nI0912 01:07:33.707247       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0912 01:07:34.342870       1 operator.go:265] Starting MachineConfigOperator\nI0912 01:07:34.348747       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"e3ba7598-eb3a-49b2-9ab5-ab99532c5400", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 4.5.0-0.ci.test-2020-09-12-001354-ci-op-78nxcwr1}] to [{operator 4.5.0-0.ci.test-2020-09-12-001515-ci-op-78nxcwr1}]\n
Sep 12 01:13:16.824 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7cd6bfb66-p8627 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/operator container exited with code 1 (Error): 5247       1 request.go:557] Throttling request took 152.038631ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0912 01:12:31.325268       1 request.go:557] Throttling request took 196.075686ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0912 01:12:46.389345       1 httplog.go:90] verb="GET" URI="/metrics" latency=7.386243ms resp=200 UserAgent="Prometheus/2.15.2" srcIP="10.131.0.30:46180": \nI0912 01:12:50.596724       1 httplog.go:90] verb="GET" URI="/metrics" latency=2.368485ms resp=200 UserAgent="Prometheus/2.15.2" srcIP="10.129.2.26:58636": \nI0912 01:12:51.121605       1 request.go:557] Throttling request took 148.355825ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0912 01:12:51.321587       1 request.go:557] Throttling request took 195.832988ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0912 01:13:11.140822       1 request.go:557] Throttling request took 84.213412ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0912 01:13:11.338281       1 request.go:557] Throttling request took 188.92021ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0912 01:13:12.646828       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0912 01:13:12.647290       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0912 01:13:12.647368       1 builder.go:219] server exited\nW0912 01:13:12.647425       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Sep 12 01:13:25.014 E ns/openshift-marketplace pod/certified-operators-554db695d7-pr7p4 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/certified-operators container exited with code 2 (Error): 
Sep 12 01:13:26.004 E ns/openshift-monitoring pod/prometheus-adapter-6658db64fb-mmp4f node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/prometheus-adapter container exited with code 2 (Error): rting request-header::/etc/tls/private/requestheader-client-ca-file\nI0912 01:10:53.652395       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0912 01:10:53.653813       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0912 01:11:37.925019       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:37.925154       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:37.966698       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:37.966894       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:38.009898       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:38.009997       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
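The prometheus-adapter errors above come from its delegated authorizer: for each incoming request it asks the apiserver, via a SubjectAccessReview, whether the caller is allowed, and during the rollout the adapter's own service account was briefly forbidden from creating those reviews. A hedged sketch of such a check follows; the user and resource attributes are illustrative, not the adapter's actual request.

// Sketch of a delegated-authorization check: create a SubjectAccessReview
// asking the apiserver whether a given user may perform an action. The 403s
// above are this Create call itself being forbidden. Names are illustrative.
package main

import (
	"context"
	"fmt"

	authzv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sar := &authzv1.SubjectAccessReview{
		Spec: authzv1.SubjectAccessReviewSpec{
			User: "system:serviceaccount:openshift-monitoring:prometheus-k8s",
			ResourceAttributes: &authzv1.ResourceAttributes{
				Verb:     "get",
				Group:    "metrics.k8s.io",
				Resource: "pods",
			},
		},
	}
	result, err := client.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		// This is where the "subjectaccessreviews ... is forbidden" errors above surface.
		klog.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", result.Status.Allowed, result.Status.Reason)
}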
Sep 12 01:13:26.026 E ns/openshift-monitoring pod/telemeter-client-7b44fb4465-wzcz7 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/telemeter-client container exited with code 2 (Error): 
Sep 12 01:13:26.026 E ns/openshift-monitoring pod/telemeter-client-7b44fb4465-wzcz7 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/reload container exited with code 2 (Error): 
Sep 12 01:13:26.110 E ns/openshift-monitoring pod/thanos-querier-5c555ff885-49pcj node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/oauth-proxy container exited with code 2 (Error): 2 00:56:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/12 00:56:37 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0912 00:56:37.956320       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/12 00:56:37 http.go:107: HTTPS: listening on [::]:9091\n2020/09/12 00:58:29 oauthproxy.go:774: basicauth: 10.129.0.88:35842 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 00:59:29 oauthproxy.go:774: basicauth: 10.129.0.88:36524 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:00:29 oauthproxy.go:774: basicauth: 10.129.0.88:37876 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:01:29 oauthproxy.go:774: basicauth: 10.129.0.88:38684 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:02:29 oauthproxy.go:774: basicauth: 10.129.0.88:39494 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:03:29 oauthproxy.go:774: basicauth: 10.129.0.88:40242 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:06:29 oauthproxy.go:774: basicauth: 10.129.0.88:42346 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:07:29 oauthproxy.go:774: basicauth: 10.129.0.88:43128 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:10:29 oauthproxy.go:774: basicauth: 10.129.0.88:45392 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:10:56 oauthproxy.go:774: basicauth: 10.128.0.71:59618 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:12:55 oauthproxy.go:774: basicauth: 10.128.0.71:51484 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 12 01:13:41.357 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-78nxcwr1-eabca-cqthj-worker-b-psq4s container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-12T01:13:38.610Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-12T01:13:38.615Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-12T01:13:38.616Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-12T01:13:38.617Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-12T01:13:38.617Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-12T01:13:38.617Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-12T01:13:38.629Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-12T01:13:38.629Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-12
Sep 12 01:13:46.807 E ns/openshift-cluster-samples-operator pod/cluster-samples-operator-6fdb4c7947-86b6v node/ci-op-78nxcwr1-eabca-cqthj-master-0 container/cluster-samples-operator container exited with code 2 (Error): 
Sep 12 01:13:53.078 E ns/e2e-k8s-sig-apps-job-upgrade-1544 pod/foo-9w4c7 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/c container exited with code 137 (Error): 
Sep 12 01:14:08.224 E ns/e2e-k8s-service-lb-available-3976 pod/service-test-jm8sk node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/netexec container exited with code 2 (Error): 
Sep 12 01:14:46.810 E ns/openshift-sdn pod/sdn-6mqs6 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/sdn container exited with code 255 (Error): ca-cqthj-master-1\nI0912 01:13:44.678338  121322 proxier.go:370] userspace proxy: processing 0 service events\nI0912 01:13:44.678854  121322 proxier.go:349] userspace syncProxyRules took 188.5993ms\nI0912 01:13:44.714639  121322 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-78nxcwr1-eabca-cqthj-master-1\ninterrupt: Gracefully shutting down ...\nI0912 01:13:45.885293  121322 reflector.go:181] Stopping reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.885892  121322 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.885950  121322 reflector.go:181] Stopping reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.886026  121322 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.886075  121322 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.886114  121322 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.886156  121322 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:13:45.886191  121322 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:14:46.016424    2290 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0912 01:14:46.029425    2290 feature_gate.go:243] feature gates: &{map[]}\nI0912 01:14:46.029675    2290 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0912 01:14:46.029769    2290 cmd.go:216] Watching config file /config/..2020_09_12_01_01_09.515457826/kube-proxy-config.yaml for changes\nF0912 01:14:46.088007    2290 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 12 01:14:49.207 E ns/openshift-sdn pod/sdn-6mqs6 node/ci-op-78nxcwr1-eabca-cqthj-master-1 container/sdn container exited with code 255 (Error): I0912 01:14:47.888664    3464 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0912 01:14:47.892207    3464 feature_gate.go:243] feature gates: &{map[]}\nI0912 01:14:47.892485    3464 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0912 01:14:47.892641    3464 cmd.go:216] Watching config file /config/..2020_09_12_01_01_09.515457826/kube-proxy-config.yaml for changes\nF0912 01:14:47.913865    3464 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 12 01:15:31.694 E ns/openshift-cluster-machine-approver pod/machine-approver-588fb84cf8-rndqd node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/machine-approver-controller container exited with code 2 (Error): to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=23366&timeoutSeconds=499&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:57:34.094276       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=29705&timeoutSeconds=427&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:57:34.099033       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=23366&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:57:35.095137       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=29705&timeoutSeconds=319&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:57:35.100545       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=23366&timeoutSeconds=346&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0912 00:57:41.820332       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: the server is currently unable to handle the request (get clusteroperators.config.openshift.io)\n
Sep 12 01:15:38.380 E ns/openshift-service-ca-operator pod/service-ca-operator-59465784c5-db2wb node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/operator container exited with code 1 (Error): 
Sep 12 01:15:57.647 E ns/openshift-console pod/console-fd5754d75-mq2xq node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/console container exited with code 2 (Error): 2020-09-12T01:10:55Z cmd/main: cookies are secure!\n2020-09-12T01:10:56Z cmd/main: Binding to [::]:8443...\n2020-09-12T01:10:56Z cmd/main: using TLS\n
Sep 12 01:17:09.735 E ns/openshift-sdn pod/sdn-2c4tb node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/sdn container exited with code 255 (Error): master-2 failed: pods "recyler-pod-ci-op-78nxcwr1-eabca-cqthj-master-2" not found\nI0912 01:16:08.369867  118298 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-78nxcwr1-eabca-cqthj-master-2\nI0912 01:16:08.451021  118298 pod.go:541] CNI_DEL openshift-infra/recyler-pod-ci-op-78nxcwr1-eabca-cqthj-master-2\ninterrupt: Gracefully shutting down ...\nI0912 01:16:09.154624  118298 reflector.go:181] Stopping reflector *v1.Pod (30s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.154710  118298 reflector.go:181] Stopping reflector *v1.Endpoints (30s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.154833  118298 reflector.go:181] Stopping reflector *v1.NetworkPolicy (30s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.154958  118298 reflector.go:181] Stopping reflector *v1.Service (30s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.154995  118298 reflector.go:181] Stopping reflector *v1.Namespace (30s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.155039  118298 reflector.go:181] Stopping reflector *v1.HostSubnet (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.155099  118298 reflector.go:181] Stopping reflector *v1.EgressNetworkPolicy (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:16:09.155180  118298 reflector.go:181] Stopping reflector *v1.NetNamespace (30m0s) from runtime/asm_amd64.s:1357\nI0912 01:17:08.911291    2519 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0912 01:17:08.914975    2519 feature_gate.go:243] feature gates: &{map[]}\nI0912 01:17:08.915025    2519 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0912 01:17:08.915074    2519 cmd.go:216] Watching config file /config/..2020_09_12_00_59_36.764172316/kube-proxy-config.yaml for changes\nF0912 01:17:08.965886    2519 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: Get https://api-int.ci-op-78nxcwr1-eabca.origin-ci-int-gce.dev.openshift.com:6443/apis/network.openshift.io/v1/clusternetworks/default: dial tcp 10.0.0.2:6443: connect: connection refused\n
Sep 12 01:17:18.427 E ns/openshift-sdn pod/sdn-2c4tb node/ci-op-78nxcwr1-eabca-cqthj-master-2 container/sdn container exited with code 255 (Error): I0912 01:17:10.865883    3611 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml\nI0912 01:17:10.869333    3611 feature_gate.go:243] feature gates: &{map[]}\nI0912 01:17:10.869457    3611 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes\nI0912 01:17:10.869527    3611 cmd.go:216] Watching config file /config/..2020_09_12_00_59_36.764172316/kube-proxy-config.yaml for changes\nF0912 01:17:17.954222    3611 cmd.go:106] Failed to initialize sdn: failed to initialize SDN: could not get ClusterNetwork resource: clusternetworks.network.openshift.io "default" is forbidden: User "system:serviceaccount:openshift-sdn:sdn" cannot get resource "clusternetworks" in API group "network.openshift.io" at the cluster scope\n
Sep 12 01:17:52.449 E ns/openshift-marketplace pod/certified-operators-7c8d5f6485-slnm8 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/certified-operators container exited with code 2 (Error): 
Sep 12 01:17:52.533 E ns/openshift-marketplace pod/community-operators-7c75fb577f-xbfd5 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/community-operators container exited with code 2 (Error): 
Sep 12 01:17:52.562 E ns/openshift-monitoring pod/prometheus-adapter-6658db64fb-sdqrp node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/prometheus-adapter container exited with code 2 (Error): I0912 00:56:26.366700       1 adapter.go:94] successfully using in-cluster auth\nI0912 00:56:36.083823       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0912 00:56:36.083839       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0912 00:56:36.084165       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0912 00:56:36.085594       1 secure_serving.go:178] Serving securely on [::]:6443\nI0912 00:56:36.086300       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0912 01:11:37.958786       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0912 01:11:37.959006       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Sep 12 01:17:52.586 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-79785f56c7-flhg4 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/snapshot-controller container exited with code 2 (Error): 
Sep 12 01:17:52.686 E ns/openshift-monitoring pod/thanos-querier-5c555ff885-2wbcn node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/oauth-proxy container exited with code 2 (Error): 2020/09/12 00:56:26 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/12 00:56:26 http.go:107: HTTPS: listening on [::]:9091\nI0912 00:56:26.879951       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/12 00:57:29 oauthproxy.go:774: basicauth: 10.129.0.88:34832 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:04:29 oauthproxy.go:774: basicauth: 10.129.0.88:40970 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:05:29 oauthproxy.go:774: basicauth: 10.129.0.88:41678 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:08:29 oauthproxy.go:774: basicauth: 10.129.0.88:43936 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:09:29 oauthproxy.go:774: basicauth: 10.129.0.88:44674 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:12:00 oauthproxy.go:774: basicauth: 10.128.0.71:45496 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:13:18 oauthproxy.go:774: basicauth: 10.130.0.82:57484 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:14:18 oauthproxy.go:774: basicauth: 10.130.0.82:42502 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:15:43 oauthproxy.go:774: basicauth: 10.128.0.15:46778 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:15:43 oauthproxy.go:774: basicauth: 10.128.0.15:46778 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:16:51 oauthproxy.go:774: basicauth: 10.128.0.15:52398 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/12 01:16:51 oauthproxy.go:774: basicauth: 10.128.0.15:52398 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 12 01:17:53.487 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/config-reloader container exited with code 2 (Error): 2020/09/12 00:56:29 Watching directory: "/etc/alertmanager/config"\n
Sep 12 01:17:53.487 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/alertmanager-proxy container exited with code 2 (Error): 2020/09/12 00:56:29 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 00:56:29 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/12 00:56:29 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/12 00:56:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/12 00:56:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/12 00:56:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 00:56:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/12 00:56:29 http.go:107: HTTPS: listening on [::]:9095\nI0912 00:56:29.314125       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 12 01:17:53.560 E ns/openshift-monitoring pod/kube-state-metrics-67fc657d4c-vhwsj node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/kube-state-metrics container exited with code 2 (Error): 
Sep 12 01:17:53.585 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/config-reloader container exited with code 2 (Error): 2020/09/12 01:11:07 Watching directory: "/etc/alertmanager/config"\n
Sep 12 01:17:53.585 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-78nxcwr1-eabca-cqthj-worker-d-smwrb container/alertmanager-proxy container exited with code 2 (Error): 2020/09/12 01:11:07 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 01:11:07 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/12 01:11:07 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/12 01:11:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/12 01:11:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/12 01:11:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/12 01:11:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/12 01:11:07 http.go:107: HTTPS: listening on [::]:9095\nI0912 01:11:07.956158       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 12 01:18:09.479 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-78nxcwr1-eabca-cqthj-worker-c-whvfl container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-12T01:18:06.646Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-12T01:18:06.650Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-12T01:18:06.650Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-12T01:18:06.652Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-12T01:18:06.652Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-12T01:18:06.652Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-12T01:18:06.656Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-12T01:18:06.657Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-12