Result: SUCCESS
Tests: 3 failed / 61 succeeded
Started: 2020-08-29 09:36
Elapsed: 1h24m
Work namespace: ci-op-htssdq3h
Refs: master:14213b6c, 431:459d0053
Pod: 1ce8cad3-e9db-11ea-a5e1-0a580a830b7f
Repo: openshift/cluster-etcd-operator
Revision: 1

Test Failures


Cluster upgrade [sig-api-machinery] OAuth APIs remain available (39m12s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sOAuth\sAPIs\sremain\savailable$'
API "oauth-api-available" was unreachable during disruption for at least 1s of 39m12s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Aug 29 10:41:27.553 I oauth-apiserver OAuth API stopped responding to GET requests: Get https://api.ci-op-htssdq3h-e4498.origin-ci-int-gce.dev.openshift.com:6443/apis/oauth.openshift.io/v1/oauthaccesstokens/missing?timeout=15s: dial tcp 35.229.100.58:6443: connect: connection refused
Aug 29 10:41:28.551 E oauth-apiserver OAuth API is not responding to GET requests
Aug 29 10:41:28.575 I oauth-apiserver OAuth API started responding to GET requests
from junit_upgrade_1598698382.xml


Cluster upgrade [sig-network-edge] Cluster frontend ingress remain available (39m12s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-network\-edge\]\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 3m18s of 39m11s (8%), this is currently sufficient to pass the test/job but not considered completely correct:

Aug 29 10:23:52.280 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 29 10:23:52.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:23:53.280 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 29 10:23:53.280 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 29 10:23:55.280 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 29 10:23:56.280 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 29 10:24:03.358 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:24:03.378 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 29 10:24:05.308 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 29 10:40:19.280 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 29 10:40:19.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:40:19.291 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:40:20.280 - 18s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 29 10:40:25.280 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 29 10:40:26.280 - 16s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 29 10:40:31.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:40:31.309 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:40:39.333 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 29 10:40:42.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:40:42.307 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:40:42.317 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 29 10:43:04.280 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 29 10:43:04.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:43:05.280 - 28s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 29 10:43:05.280 - 38s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 29 10:43:05.280 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 29 10:43:06.280 - 28s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 29 10:43:34.349 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 29 10:43:35.353 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 29 10:43:44.315 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:43:45.280 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 29 10:43:45.337 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 29 10:43:52.280 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 29 10:43:53.280 - 11s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 29 10:43:55.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:43:55.299 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:43:56.280 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 29 10:43:57.280 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 29 10:44:05.326 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 29 10:44:06.280 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 29 10:44:06.293 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 29 10:44:06.302 I ns/openshift-console route/console Route started responding to GET requests over new connections
from junit_upgrade_1598698382.xml


openshift-tests [sig-arch] Monitor cluster while tests execute (44m34s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
72 error level events were detected during this test run:

Aug 29 10:22:49.359 E ns/openshift-machine-api pod/machine-api-controllers-6c756884dd-wglz9 node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/machineset-controller container exited with code 1 (Error): 
Aug 29 10:22:53.076 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6497d7ff7f-kl4zr node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/kube-storage-version-migrator-operator container exited with code 1 (Error): pected EOF\nI0829 10:07:23.768768       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:07:23.768775       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:07:23.768773       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.924921       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.925538       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.925898       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.968102       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.982444       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:16:01.982849       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0829 10:22:51.967487       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0829 10:22:51.967560       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0829 10:22:51.967588       1 builder.go:248] server exited\nI0829 10:22:51.967677       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0829 10:22:51.967773       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0829 10:22:51.967806       1 base_controller.go:58] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ...\nI0829 10:22:51.970724       1 base_controller.go:48] All StatusSyncer_kube-storage-version-migrator workers have been terminated\nW0829 10:22:51.967823       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
Aug 29 10:23:03.751 E ns/openshift-kube-storage-version-migrator pod/migrator-869fb68889-596gn node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/migrator container exited with code 2 (Error): I0829 10:01:28.141886       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0829 10:01:28.141996       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0829 10:01:28.142003       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0829 10:01:28.142011       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0829 10:01:28.142019       1 migrator.go:18] FLAG: --kubeconfig=""\nI0829 10:01:28.142025       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0829 10:01:28.142033       1 migrator.go:18] FLAG: --log_dir=""\nI0829 10:01:28.142039       1 migrator.go:18] FLAG: --log_file=""\nI0829 10:01:28.142044       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0829 10:01:28.142050       1 migrator.go:18] FLAG: --logtostderr="true"\nI0829 10:01:28.142055       1 migrator.go:18] FLAG: --skip_headers="false"\nI0829 10:01:28.142060       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0829 10:01:28.142065       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0829 10:01:28.142070       1 migrator.go:18] FLAG: --v="2"\nI0829 10:01:28.142076       1 migrator.go:18] FLAG: --vmodule=""\nI0829 10:01:28.144553       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\n
Aug 29 10:23:04.150 E ns/openshift-cluster-machine-approver pod/machine-approver-7584778645-q6kf7 node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/machine-approver-controller container exited with code 2 (Error): Version=22341&timeoutSeconds=387&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0829 10:03:21.481634       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=27062&timeoutSeconds=389&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0829 10:16:01.953360       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get "https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=27641&timeoutSeconds=323&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0829 10:16:01.953874       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=38981&timeoutSeconds=478&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0829 10:16:02.954140       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get "https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=27641&timeoutSeconds=317&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\nE0829 10:16:02.960674       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=38981&timeoutSeconds=507&watch=true": dial tcp 127.0.0.1:6443: connect: connection refused\n
Aug 29 10:23:26.796 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/config-reloader container exited with code 2 (Error): 2020/08/29 10:01:59 Watching directory: "/etc/alertmanager/config"\n
Aug 29 10:23:26.796 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/alertmanager-proxy container exited with code 2 (Error): 2020/08/29 10:01:59 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/29 10:01:59 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/29 10:01:59 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/29 10:01:59 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/29 10:01:59 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/08/29 10:01:59 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/29 10:01:59 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/08/29 10:01:59 http.go:107: HTTPS: listening on [::]:9095\nI0829 10:01:59.757925       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Aug 29 10:24:05.737 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n container/prometheus container exited with code 2 (Error): level=error ts=2020-08-29T10:24:03.518Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Aug 29 10:24:07.928 E ns/openshift-monitoring pod/node-exporter-jzbdz node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-08-29T09:59:28.135Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-08-29T09:59:28.135Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Aug 29 10:24:11.066 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/prometheus-proxy container exited with code 2 (Error): 020/08/29 10:02:02 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/29 10:02:02 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/08/29 10:02:02 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/29 10:02:02 http.go:107: HTTPS: listening on [::]:9091\nI0829 10:02:02.954521       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/29 10:02:58 oauthproxy.go:785: basicauth: 10.128.2.3:39482 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:07:28 oauthproxy.go:785: basicauth: 10.128.2.3:43360 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:11:59 oauthproxy.go:785: basicauth: 10.128.2.3:47900 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:16:29 oauthproxy.go:785: basicauth: 10.128.2.3:52652 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:19:01 oauthproxy.go:785: basicauth: 10.130.0.25:58342 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:19:01 oauthproxy.go:785: basicauth: 10.130.0.25:58342 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:20:59 oauthproxy.go:785: basicauth: 10.128.2.3:57460 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:23:36 oauthproxy.go:785: basicauth: 10.128.2.31:54708 Authorization header does not start with 'Basic', skipping basic authentication\n202
Aug 29 10:24:11.066 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/rules-configmap-reloader container exited with code 2 (Error): 2020/08/29 10:01:59 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/08/29 10:03:07 config map updated\n2020/08/29 10:03:08 successfully triggered reload\n2020/08/29 10:09:27 config map updated\n2020/08/29 10:09:27 successfully triggered reload\n
Aug 29 10:24:11.066 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-29T10:01:59.413085765Z caller=main.go:87 msg="Starting prometheus-config-reloader version 'rhel-8-golang-openshift-4.6'."\nlevel=error ts=2020-08-29T10:01:59.415083534Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-29T10:02:04.565340794Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-08-29T10:02:04.56553215Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-08-29T10:03:08.097223827Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-08-29T10:11:04.890976413Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 29 10:24:17.093 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/prometheus container exited with code 2 (Error): level=error ts=2020-08-29T10:24:14.785Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Aug 29 10:24:19.108 E ns/openshift-monitoring pod/node-exporter-2k5c4 node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-08-29T09:59:28.807Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-08-29T09:59:28.808Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-08-29T09:59:28.808Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Aug 29 10:24:35.848 E ns/openshift-monitoring pod/node-exporter-tqmrb node/ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-08-29T10:00:26.849Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-08-29T10:00:26.849Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Aug 29 10:24:47.343 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-fb98b79cd-tjffm node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/snapshot-controller container exited with code 2 (Error): 
Aug 29 10:26:56.987 E ns/openshift-sdn pod/sdn-controller-xwt5l node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/sdn-controller container exited with code 2 (Error): I0829 09:49:23.272410       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 29 10:27:02.940 E ns/openshift-sdn pod/sdn-controller-vgq9v node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/sdn-controller container exited with code 2 (Error): I0829 09:49:17.022187       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0829 10:01:09.198829       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get "https://api-int.ci-op-htssdq3h-e4498.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": read tcp 10.0.0.5:47378->10.0.0.2:6443: read: connection timed out\n
Aug 29 10:27:11.058 E ns/openshift-sdn pod/ovs-k2x6p node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/openvswitch container exited with code 137 (Error): 29T10:23:53.163Z|00452|bridge|INFO|bridge br0: added interface vethbf587d3d on port 73\n2020-08-29T10:23:53.399Z|00453|connmgr|INFO|br0<->unix#1111: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:23:53.975Z|00454|connmgr|INFO|br0<->unix#1114: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:24:13.687Z|00455|connmgr|INFO|br0<->unix#1120: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:24:13.736Z|00456|connmgr|INFO|br0<->unix#1123: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:24:13.801Z|00457|bridge|INFO|bridge br0: deleted interface vetha8e7af0e on port 51\n2020-08-29T10:24:32.103Z|00458|connmgr|INFO|br0<->unix#1129: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:24:32.214Z|00459|connmgr|INFO|br0<->unix#1132: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:24:32.267Z|00460|bridge|INFO|bridge br0: deleted interface veth001f08ec on port 18\n2020-08-29T10:24:40.949Z|00461|bridge|INFO|bridge br0: added interface veth4a02bf24 on port 74\n2020-08-29T10:24:40.991Z|00462|connmgr|INFO|br0<->unix#1135: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:24:41.060Z|00463|connmgr|INFO|br0<->unix#1138: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:24:46.564Z|00464|connmgr|INFO|br0<->unix#1141: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:24:46.599Z|00465|connmgr|INFO|br0<->unix#1144: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:24:46.638Z|00466|bridge|INFO|bridge br0: deleted interface veth19c52a6b on port 32\n2020-08-29T10:27:09.974Z|00467|connmgr|INFO|br0<->unix#1163: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:27:10.020Z|00468|connmgr|INFO|br0<->unix#1166: 4 flow_mods in the last 0 s (4 deletes)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-08-29T10:27:10.032Z|00138|jsonrpc|WARN|unix#1277: send error: Broken pipe\n2020-08-29T10:27:10.032Z|00139|reconnect|WARN|unix#1277: connection dropped (Broken pipe)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-08-29T10:27:10.063Z|00469|bridge|INFO|bridge br0: deleted interface vethbdb64156 on port 9\n
Aug 29 10:27:56.239 E ns/openshift-multus pod/multus-admission-controller-42rrn node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/multus-admission-controller container exited with code 137 (Error): 
Aug 29 10:28:31.414 E ns/openshift-multus pod/multus-vnt4n node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/kube-multus container exited with code 137 (Error): 
Aug 29 10:28:37.289 E ns/openshift-multus pod/multus-admission-controller-2p94z node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/multus-admission-controller container exited with code 137 (Error): 
Aug 29 10:29:10.272 E ns/openshift-multus pod/multus-dp7xh node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/kube-multus container exited with code 137 (Error): 
Aug 29 10:29:11.451 E ns/openshift-sdn pod/ovs-5hkxn node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/openvswitch container exited with code 137 (Error): >unix#1170: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:28:15.566Z|00457|connmgr|INFO|br0<->unix#1172: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:28:15.626Z|00458|connmgr|INFO|br0<->unix#1176: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:28:15.629Z|00459|connmgr|INFO|br0<->unix#1178: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:28:15.668Z|00460|connmgr|INFO|br0<->unix#1182: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:28:15.673Z|00461|connmgr|INFO|br0<->unix#1184: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:28:15.711Z|00462|connmgr|INFO|br0<->unix#1187: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:28:15.744Z|00463|connmgr|INFO|br0<->unix#1190: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:28:15.783Z|00464|connmgr|INFO|br0<->unix#1193: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:28:15.833Z|00465|connmgr|INFO|br0<->unix#1196: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:28:15.864Z|00466|connmgr|INFO|br0<->unix#1199: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:28:15.893Z|00467|connmgr|INFO|br0<->unix#1202: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:28:15.922Z|00468|connmgr|INFO|br0<->unix#1205: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:28:36.735Z|00469|connmgr|INFO|br0<->unix#1208: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:28:36.810Z|00470|connmgr|INFO|br0<->unix#1211: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:28:36.850Z|00471|bridge|INFO|bridge br0: deleted interface veth7cee170c on port 14\n2020-08-29T10:28:44.316Z|00472|bridge|INFO|bridge br0: added interface veth97809bed on port 77\n2020-08-29T10:28:44.367Z|00473|connmgr|INFO|br0<->unix#1214: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:28:44.416Z|00474|connmgr|INFO|br0<->unix#1217: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-08-29T10:28:45.395Z|00110|jsonrpc|WARN|unix#1287: send error: Broken pipe\n2020-08-29T10:28:45.395Z|00111|reconnect|WARN|unix#1287: connection dropped (Broken pipe)\n
Aug 29 10:29:59.879 E ns/openshift-sdn pod/ovs-4fh7m node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/openvswitch container exited with code 137 (Error): )\n2020-08-29T10:26:45.537Z|00443|connmgr|INFO|br0<->unix#1135: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:26:45.574Z|00444|connmgr|INFO|br0<->unix#1138: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:26:45.618Z|00445|connmgr|INFO|br0<->unix#1141: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:26:45.660Z|00446|connmgr|INFO|br0<->unix#1144: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:26:45.701Z|00447|connmgr|INFO|br0<->unix#1147: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:26:45.742Z|00448|connmgr|INFO|br0<->unix#1150: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:11.896Z|00449|connmgr|INFO|br0<->unix#1153: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:27:11.944Z|00450|connmgr|INFO|br0<->unix#1156: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:27:11.992Z|00451|bridge|INFO|bridge br0: deleted interface veth4c5d477c on port 10\n2020-08-29T10:27:14.555Z|00452|bridge|INFO|bridge br0: added interface vethb6c93caf on port 73\n2020-08-29T10:27:14.594Z|00453|connmgr|INFO|br0<->unix#1159: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:27:14.647Z|00454|connmgr|INFO|br0<->unix#1162: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-08-29T10:27:31.524Z|00132|jsonrpc|WARN|unix#1243: send error: Broken pipe\n2020-08-29T10:27:31.525Z|00133|reconnect|WARN|unix#1243: connection dropped (Broken pipe)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-08-29T10:27:55.675Z|00455|connmgr|INFO|br0<->unix#1171: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:27:55.710Z|00456|connmgr|INFO|br0<->unix#1174: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:27:55.750Z|00457|bridge|INFO|bridge br0: deleted interface vethf808039d on port 5\n2020-08-29T10:28:02.553Z|00458|bridge|INFO|bridge br0: added interface vethd22bf8a4 on port 74\n2020-08-29T10:28:02.594Z|00459|connmgr|INFO|br0<->unix#1177: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:28:02.650Z|00460|connmgr|INFO|br0<->unix#1180: 2 flow_mods in the last 0 s (2 deletes)\n
Aug 29 10:30:31.060 E ns/openshift-multus pod/multus-5b24l node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/kube-multus container exited with code 137 (Error): 
Aug 29 10:30:59.158 E ns/openshift-sdn pod/ovs-h75z9 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/openvswitch container exited with code 137 (Error): -08-29T10:26:57.635Z|00148|connmgr|INFO|br0<->unix#435: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:26:57.666Z|00149|bridge|INFO|bridge br0: deleted interface veth54c39482 on port 4\n2020-08-29T10:27:08.271Z|00150|bridge|INFO|bridge br0: added interface veth404c2b1c on port 23\n2020-08-29T10:27:08.323Z|00151|connmgr|INFO|br0<->unix#441: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:27:08.387Z|00152|connmgr|INFO|br0<->unix#444: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:27:39.391Z|00153|connmgr|INFO|br0<->unix#450: 2 flow_mods in the last 0 s (2 adds)\n2020-08-29T10:27:39.458Z|00154|connmgr|INFO|br0<->unix#454: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:39.555Z|00155|connmgr|INFO|br0<->unix#466: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:39.582Z|00156|connmgr|INFO|br0<->unix#470: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:39.616Z|00157|connmgr|INFO|br0<->unix#473: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:39.647Z|00158|connmgr|INFO|br0<->unix#476: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:39.750Z|00159|connmgr|INFO|br0<->unix#479: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:39.779Z|00160|connmgr|INFO|br0<->unix#482: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:39.806Z|00161|connmgr|INFO|br0<->unix#485: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:39.844Z|00162|connmgr|INFO|br0<->unix#488: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:39.876Z|00163|connmgr|INFO|br0<->unix#491: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:39.903Z|00164|connmgr|INFO|br0<->unix#494: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:39.934Z|00165|connmgr|INFO|br0<->unix#497: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:39.966Z|00166|connmgr|INFO|br0<->unix#500: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:39.998Z|00167|connmgr|INFO|br0<->unix#503: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:40.036Z|00168|connmgr|INFO|br0<->unix#506: 1 flow_mods in the last 0 s (1 adds)\n
Aug 29 10:31:49.714 E ns/openshift-sdn pod/ovs-n6zpk node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/openvswitch container exited with code 137 (Error): 08-29T10:27:08.761Z|00210|connmgr|INFO|br0<->unix#561: 2 flow_mods in the last 0 s (2 adds)\n2020-08-29T10:27:08.826Z|00211|connmgr|INFO|br0<->unix#565: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:08.896Z|00212|connmgr|INFO|br0<->unix#575: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:08.931Z|00213|connmgr|INFO|br0<->unix#581: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:08.962Z|00214|connmgr|INFO|br0<->unix#584: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-29T10:27:09.072Z|00215|connmgr|INFO|br0<->unix#587: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:09.114Z|00216|connmgr|INFO|br0<->unix#590: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:09.146Z|00217|connmgr|INFO|br0<->unix#593: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:09.169Z|00218|connmgr|INFO|br0<->unix#596: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:09.199Z|00219|connmgr|INFO|br0<->unix#599: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:09.225Z|00220|connmgr|INFO|br0<->unix#602: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:09.256Z|00221|connmgr|INFO|br0<->unix#605: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:09.290Z|00222|connmgr|INFO|br0<->unix#608: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:09.320Z|00223|connmgr|INFO|br0<->unix#611: 3 flow_mods in the last 0 s (3 adds)\n2020-08-29T10:27:09.344Z|00224|connmgr|INFO|br0<->unix#614: 1 flow_mods in the last 0 s (1 adds)\n2020-08-29T10:27:34.555Z|00225|connmgr|INFO|br0<->unix#617: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-29T10:27:34.586Z|00226|connmgr|INFO|br0<->unix#620: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-29T10:27:34.616Z|00227|bridge|INFO|bridge br0: deleted interface veth506b0f13 on port 13\n2020-08-29T10:27:36.450Z|00228|bridge|INFO|bridge br0: added interface veth0f96af21 on port 38\n2020-08-29T10:27:36.486Z|00229|connmgr|INFO|br0<->unix#623: 5 flow_mods in the last 0 s (5 adds)\n2020-08-29T10:27:36.542Z|00230|connmgr|INFO|br0<->unix#626: 2 flow_mods in the last 0 s (2 deletes)\n
Aug 29 10:34:28.877 E ns/openshift-machine-config-operator pod/machine-config-daemon-kdvnz node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/oauth-proxy container exited with code 143 (Error): 
Aug 29 10:34:33.670 E ns/openshift-machine-config-operator pod/machine-config-daemon-xxmm7 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/oauth-proxy container exited with code 143 (Error): 
Aug 29 10:34:40.292 E ns/openshift-machine-config-operator pod/machine-config-daemon-cqd2c node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/oauth-proxy container exited with code 143 (Error): 
Aug 29 10:34:54.402 E ns/openshift-machine-config-operator pod/machine-config-daemon-kb5xj node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/oauth-proxy container exited with code 143 (Error): 
Aug 29 10:35:15.295 E ns/openshift-machine-config-operator pod/machine-config-daemon-p5dvj node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/oauth-proxy container exited with code 143 (Error): 
Aug 29 10:35:33.570 E ns/openshift-machine-config-operator pod/machine-config-controller-5dc67944d9-6gp8z node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/machine-config-controller container exited with code 2 (Error): ion machineconfiguration.openshift.io/currentConfig = rendered-worker-7a8a4b10853a3a8248d89a5373ed2f8a\nI0829 10:00:36.174627       1 node_controller.go:419] Pool worker: node ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-7a8a4b10853a3a8248d89a5373ed2f8a\nI0829 10:00:36.174633       1 node_controller.go:419] Pool worker: node ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr: changed annotation machineconfiguration.openshift.io/state = Done\nE0829 10:00:41.254050       1 render_controller.go:459] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0829 10:00:41.254084       1 render_controller.go:376] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0829 10:00:47.505812       1 node_controller.go:419] Pool worker: node ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb: Reporting ready\nI0829 10:00:48.370272       1 node_controller.go:419] Pool worker: node ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr: Reporting ready\nI0829 10:00:53.165070       1 node_controller.go:419] Pool worker: node ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n: Reporting ready\nE0829 10:00:57.613215       1 render_controller.go:459] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0829 10:00:57.613244       1 render_controller.go:376] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\n
Aug 29 10:37:33.005 E ns/openshift-machine-config-operator pod/machine-config-server-cwr68 node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/machine-config-server container exited with code 2 (Error): I0829 09:50:46.405907       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-226-g36f37f2d-dirty (36f37f2d6009affe8174854f5ef5538e0cc49034)\nI0829 09:50:46.406575       1 api.go:71] Launching server on :22624\nI0829 09:50:46.406622       1 api.go:71] Launching server on :22623\nI0829 09:58:13.351839       1 api.go:119] Pool worker requested by address:"10.0.32.3:60298" User-Agent:"Ignition/2.6.0" Accept-Header: "application/vnd.coreos.ignition+json;version=3.1.0, */*;q=0.1"\n
Aug 29 10:37:43.336 E ns/openshift-machine-config-operator pod/machine-config-server-vfdsh node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/machine-config-server container exited with code 2 (Error): I0829 09:50:46.366980       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-226-g36f37f2d-dirty (36f37f2d6009affe8174854f5ef5538e0cc49034)\nI0829 09:50:46.367652       1 api.go:71] Launching server on :22624\nI0829 09:50:46.367697       1 api.go:71] Launching server on :22623\n
Aug 29 10:37:44.448 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n container/config-reloader container exited with code 2 (Error): 2020/08/29 10:24:19 Watching directory: "/etc/alertmanager/config"\n
Aug 29 10:37:44.448 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n container/alertmanager-proxy container exited with code 2 (Error): 2020/08/29 10:24:19 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/29 10:24:19 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/29 10:24:19 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/29 10:24:19 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/29 10:24:19 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/08/29 10:24:19 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/29 10:24:19 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\nI0829 10:24:19.373568       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/29 10:24:19 http.go:107: HTTPS: listening on [::]:9095\n
Aug 29 10:37:45.671 E ns/openshift-console-operator pod/console-operator-b94fd94c6-nqhbj node/ci-op-htssdq3h-e4498-cgrfj-master-0 container/console-operator container exited with code 1 (Error): shift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:37:42.919215       1 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206\nI0829 10:37:42.919243       1 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader_controller.go:172\nI0829 10:37:42.918202       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0829 10:37:42.918270       1 reflector.go:213] Stopping reflector *v1.OAuth (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:37:42.918314       1 reflector.go:213] Stopping reflector *v1.ConsoleCLIDownload (10m0s) from github.com/openshift/client-go/console/informers/externalversions/factory.go:101\nI0829 10:37:42.918368       1 reflector.go:213] Stopping reflector *v1.Console (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:37:42.918399       1 reflector.go:213] Stopping reflector *v1.OAuthClient (10m0s) from github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101\nI0829 10:37:42.918429       1 reflector.go:213] Stopping reflector *v1.Route (10m0s) from github.com/openshift/client-go/route/informers/externalversions/factory.go:101\nI0829 10:37:42.918455       1 base_controller.go:136] Shutting down UnsupportedConfigOverridesController ...\nI0829 10:37:42.918486       1 base_controller.go:83] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0829 10:37:42.919369       1 base_controller.go:73] All UnsupportedConfigOverridesController workers have been terminated\nI0829 10:37:42.918573       1 secure_serving.go:241] Stopped listening on [::]:8443\nI0829 10:37:42.918599       1 builder.go:263] server exited\nW0829 10:37:42.918635       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Aug 29 10:38:02.822 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/prometheus container exited with code 2 (Error): level=error ts=2020-08-29T10:37:59.746Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Aug 29 10:39:29.030 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Aug 29 10:39:30.509 E ns/openshift-dns pod/dns-default-wclbv node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/dns-node-resolver container exited with code 1 (Error): /bin/bash: line 51: cmp: command not found\n/bin/bash: line 51: cmp: command not found\n/bin/bash: line 51: cmp: command not found\n/bin/bash: line 51: cmp: command not found\n/bin/bash: line 51: cmp: command not found\n/bin/bash: line 38: svc_ips: unbound variable\n
Aug 29 10:39:52.107 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Aug 29 10:39:52.120 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver
Aug 29 10:40:13.235 E ns/openshift-marketplace pod/redhat-operators-2j86x node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/registry-server container exited with code 2 (Error): 
Aug 29 10:40:13.402 E ns/openshift-monitoring pod/grafana-566c96fbdf-hrrnv node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/grafana-proxy container exited with code 2 (Error): 
Aug 29 10:40:13.429 E ns/openshift-monitoring pod/kube-state-metrics-6bf5c88d7b-2wlr2 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/kube-state-metrics container exited with code 2 (Error): 
Aug 29 10:40:13.480 E ns/openshift-marketplace pod/certified-operators-lbzpw node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/registry-server container exited with code 2 (Error): 
Aug 29 10:40:14.475 E ns/openshift-monitoring pod/openshift-state-metrics-5f9ff49595-gp7xl node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/openshift-state-metrics container exited with code 2 (Error): 
Aug 29 10:40:15.383 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/rules-configmap-reloader container exited with code 2 (Error): 2020/08/29 10:38:01 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 29 10:40:15.383 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/prometheus-proxy container exited with code 2 (Error): 2020/08/29 10:38:01 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/29 10:38:01 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/29 10:38:01 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/29 10:38:01 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/29 10:38:01 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/08/29 10:38:01 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/29 10:38:01 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/08/29 10:38:01 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/29 10:38:01 http.go:107: HTTPS: listening on [::]:9091\nI0829 10:38:01.603662       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Aug 29 10:40:15.383 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-29T10:38:00.771891487Z caller=main.go:87 msg="Starting prometheus-config-reloader version 'rhel-8-golang-openshift-4.6'."\nlevel=error ts=2020-08-29T10:38:00.774134581Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-29T10:38:05.991191504Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-08-29T10:38:05.991275516Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Aug 29 10:40:15.439 E ns/openshift-monitoring pod/prometheus-adapter-57577cdc68-d5glh node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/prometheus-adapter container exited with code 2 (Error): I0829 10:23:39.506933       1 adapter.go:94] successfully using in-cluster auth\nI0829 10:23:50.800806       1 secure_serving.go:178] Serving securely on [::]:6443\nI0829 10:23:50.800889       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0829 10:23:50.800931       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0829 10:23:50.801212       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0829 10:23:50.801355       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Aug 29 10:40:27.508 E ns/openshift-machine-api pod/machine-api-operator-746466f67f-czs6h node/ci-op-htssdq3h-e4498-cgrfj-master-1 container/machine-api-operator container exited with code 2 (Error): 
Aug 29 10:41:15.935 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-htssdq3h-e4498-cgrfj-worker-b-vxk7n container/prometheus container exited with code 2 (Error): level=error ts=2020-08-29T10:40:34.795Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Aug 29 10:42:36.635 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Aug 29 10:42:55.410 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/rules-configmap-reloader container exited with code 2 (Error): 2020/08/29 10:24:15 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 29 10:42:55.410 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/prometheus-proxy container exited with code 2 (Error): 313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/29 10:24:16 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/29 10:24:16 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2020/08/29 10:24:16 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/29 10:24:16 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2020/08/29 10:24:16 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0829 10:24:16.168163       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/08/29 10:24:16 http.go:107: HTTPS: listening on [::]:9091\n2020/08/29 10:24:36 oauthproxy.go:785: basicauth: 10.128.2.31:56238 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:25:36 oauthproxy.go:785: basicauth: 10.128.2.31:57280 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:30:06 oauthproxy.go:785: basicauth: 10.128.2.31:33822 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:34:37 oauthproxy.go:785: basicauth: 10.128.2.31:38474 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:39:17 oauthproxy.go:785: basicauth: 10.128.2.31:44164 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/29 10:40:20 oauthproxy.go:785: basicauth: 10.131.0.7:38072 Authorization header does not start with 'Basic', skipping basic authentication\n202
Aug 29 10:42:55.410 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-29T10:24:15.424774302Z caller=main.go:87 msg="Starting prometheus-config-reloader version 'rhel-8-golang-openshift-4.6'."\nlevel=error ts=2020-08-29T10:24:15.426596785Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-29T10:24:20.657399984Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-08-29T10:24:20.658214876Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Aug 29 10:43:13.389 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-6dbcbb65dc-mg9s9 node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/operator container exited with code 1 (Error): 29 10:43:03.965798371 +0000 UTC m=+1171.084320482\nI0829 10:43:03.969461       1 status_controller.go:172] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-08-29T09:50:02Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-29T10:43:03Z","message":"Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_AsExpected","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-08-29T10:43:03Z","message":"Available: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_AsExpected","status":"False","type":"Available"},{"lastTransitionTime":"2020-08-29T09:50:05Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0829 10:43:04.059287       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"d2d1a7f5-4414-46ec-930f-5a2dda7549b3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods")\nI0829 10:43:04.284512       1 operator.go:148] Finished syncing operator at 318.701054ms\nI0829 10:43:04.284644       1 operator.go:146] Starting syncing operator at 2020-08-29 10:43:04.28463786 +0000 UTC m=+1171.403159984\nI0829 10:43:04.480604       1 operator.go:148] Finished syncing operator at 195.957227ms\nI0829 10:43:05.384755       1 operator.go:146] Starting syncing operator at 2020-08-29 10:43:05.384742317 +0000 UTC m=+1172.503264431\nI0829 10:43:05.462460       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nW0829 10:43:05.463636       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Aug 29 10:43:14.574 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-htssdq3h-e4498-cgrfj-worker-c-44bkb container/prometheus container exited with code 2 (Error): level=error ts=2020-08-29T10:43:12.222Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Aug 29 10:43:15.275 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c6dc8c798-wqtlw node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/kube-storage-version-migrator-operator container exited with code 1 (Error): onTime":"2020-08-29T10:22:58Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-29T10:42:58Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-29T09:50:02Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0829 10:42:58.691589       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"79f9e487-c1bc-4244-be20-141c223d999a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0829 10:43:06.540355       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0829 10:43:06.540723       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0829 10:43:06.540775       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0829 10:43:06.540847       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0829 10:43:06.540878       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0829 10:43:06.540924       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.2/tools/cache/reflector.go:125\nI0829 10:43:06.540942       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0829 10:43:06.540954       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0829 10:43:06.540966       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0829 10:43:06.547238       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
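Several operator pods above and below exit with code 1 after logging "Received SIGTERM or SIGINT signal, shutting down controller." and "graceful termination failed, controllers failed with error: stopped". This reads as routine pod replacement during the upgrade rather than a crash: the controllers report "stopped" once their context is cancelled, and the builder logs that as a warning. A generic sketch of that shutdown shape in plain Go (the real operators use openshift/library-go's controller builder):

    package main

    import (
        "context"
        "log"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    // SIGTERM cancels a context, controllers drain, the process exits.
    func main() {
        ctx, cancel := context.WithCancel(context.Background())

        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
        go func() {
            <-sigs
            log.Println("Received SIGTERM or SIGINT signal, shutting down controller.")
            cancel()
        }()

        // Stand-in for the controller run loop.
        <-ctx.Done()
        time.Sleep(100 * time.Millisecond) // let workers drain
        log.Println("controllers stopped")
    }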
Aug 29 10:43:15.336 E ns/openshift-console-operator pod/console-operator-b94fd94c6-x9qhz node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/console-operator container exited with code 1 (Error): 0:43:06.304949       1 controller.go:70] Shutting down Console\nI0829 10:43:06.310622       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0829 10:43:06.310701       1 base_controller.go:136] Shutting down ManagementStateController ...\nI0829 10:43:06.310751       1 controller.go:349] shutting down ConsoleRouteSyncController\nI0829 10:43:06.310811       1 base_controller.go:136] Shutting down UnsupportedConfigOverridesController ...\nI0829 10:43:06.313547       1 base_controller.go:136] Shutting down LoggingSyncer ...\nI0829 10:43:06.313616       1 controller.go:181] shutting down ConsoleServiceSyncController\nI0829 10:43:06.315218       1 reflector.go:213] Stopping reflector *v1.OAuth (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:43:06.315316       1 reflector.go:213] Stopping reflector *v1.Console (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:43:06.315420       1 reflector.go:213] Stopping reflector *v1.Proxy (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0829 10:43:06.315459       1 base_controller.go:83] Shutting down worker of LoggingSyncer controller ...\nI0829 10:43:06.315499       1 base_controller.go:73] All LoggingSyncer workers have been terminated\nI0829 10:43:06.315541       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0829 10:43:06.315585       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0829 10:43:06.315606       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0829 10:43:06.317455       1 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206\nW0829 10:43:06.317746       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Aug 29 10:43:15.507 E ns/openshift-machine-api pod/machine-api-controllers-668fb4b97-44hh6 node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/machineset-controller container exited with code 1 (Error): 
Aug 29 10:43:16.839 E ns/openshift-insights pod/insights-operator-6bb64cfb64-z95rp node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/operator container exited with code 2 (Error): Recording config/proxy with fingerprint=\nI0829 10:41:01.760471       1 diskrecorder.go:179] Writing 86 records to /var/lib/insights-operator/insights-2020-08-29-104101.tar.gz\nI0829 10:41:01.804609       1 diskrecorder.go:143] Wrote 86 records to disk in 44ms\nI0829 10:41:01.804644       1 periodic.go:151] Periodic gather config completed in 4.213s\nI0829 10:41:08.521150       1 diskrecorder.go:312] Found files to send: [/var/lib/insights-operator/insights-2020-08-29-104101.tar.gz]\nI0829 10:41:08.521216       1 insightsuploader.go:131] Uploading latest report since 2020-08-29T10:24:00Z\nI0829 10:41:08.530203       1 insightsclient.go:164] Uploading application/vnd.redhat.openshift.periodic to https://cloud.redhat.com/api/ingress/v1/upload\nI0829 10:41:08.950659       1 insightsclient.go:213] Successfully reported id=2020-08-29T10:41:08Z x-rh-insights-request-id=f4b77015546e4b61b0194bef9b4d9a39, wrote=71014\nI0829 10:41:08.950702       1 insightsuploader.go:159] Uploaded report successfully in 429.484968ms\nI0829 10:41:08.956566       1 status.go:320] The operator is healthy\nI0829 10:41:27.180670       1 httplog.go:90] GET /metrics: (10.171723ms) 200 [Prometheus/2.20.0 10.131.0.21:58686]\nI0829 10:41:27.591391       1 httplog.go:90] GET /metrics: (78.143974ms) 200 [Prometheus/2.20.0 10.129.2.21:51666]\nI0829 10:41:45.843883       1 status.go:320] The operator is healthy\nI0829 10:41:45.843946       1 status.go:430] No status update necessary, objects are identical\nI0829 10:41:57.167922       1 httplog.go:90] GET /metrics: (8.36057ms) 200 [Prometheus/2.20.0 10.131.0.21:58686]\nI0829 10:41:57.509308       1 httplog.go:90] GET /metrics: (2.038839ms) 200 [Prometheus/2.20.0 10.129.2.21:51666]\nI0829 10:42:27.166462       1 httplog.go:90] GET /metrics: (6.823715ms) 200 [Prometheus/2.20.0 10.131.0.21:58686]\nI0829 10:42:27.509356       1 httplog.go:90] GET /metrics: (2.040177ms) 200 [Prometheus/2.20.0 10.129.2.21:51666]\nI0829 10:42:57.170849       1 httplog.go:90] GET /metrics: (11.321405ms) 200 [Prometheus/2.20.0 10.131.0.21:58686]\n
Aug 29 10:43:16.947 E ns/openshift-service-ca-operator pod/service-ca-operator-54bcf69457-w9gsg node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/operator container exited with code 1 (Error): 
Aug 29 10:43:17.257 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-74bc967587-v29t4 node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/operator container exited with code 1 (Error): /namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0829 10:42:38.559947       1 reflector.go:515] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: Watch close - *v1.Secret total 7 items received\nI0829 10:42:47.782251       1 request.go:581] Throttling request took 161.630532ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0829 10:42:47.982119       1 request.go:581] Throttling request took 189.010597ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0829 10:42:57.817162       1 request.go:581] Throttling request took 105.8879ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0829 10:42:58.014485       1 request.go:581] Throttling request took 192.227872ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0829 10:42:58.198499       1 httplog.go:89] "HTTP" verb="GET" URI="/metrics" latency="60.185662ms" userAgent="Prometheus/2.20.0" srcIP="10.131.0.21:47276" resp=200\nI0829 10:43:02.588634       1 reflector.go:515] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: Watch close - *v1.Secret total 6 items received\nI0829 10:43:07.594285       1 reflector.go:515] k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156: Watch close - *v1.ConfigMap total 10 items received\nI0829 10:43:10.604148       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0829 10:43:10.604755       1 reflector.go:213] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.19.0-rc.3/tools/cache/reflector.go:156\nI0829 10:43:10.608954       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nW0829 10:43:10.608675       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
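The "Throttling request took ..." lines in the operator log above come from client-go's client-side rate limiter delaying outgoing requests; the defaults are 5 QPS with a burst of 10, and both can be raised on the rest.Config. A sketch of doing so on an in-cluster config (the values are illustrative, not what this operator uses):

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 50    // default is 5
        cfg.Burst = 100 // default is 10

        // Clients built from cfg now throttle at the higher limits.
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }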
Aug 29 10:43:18.083 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-69f7fd9d64-wlr9p node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/cluster-storage-operator container exited with code 1 (Error): .go:213] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108150       1 reflector.go:213] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108163       1 reflector.go:213] Stopping reflector *v1.Infrastructure (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108178       1 reflector.go:213] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108193       1 reflector.go:213] Stopping reflector *v1.StorageClass (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108209       1 reflector.go:213] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108224       1 reflector.go:213] Stopping reflector *v1.ClusterCSIDriver (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108239       1 reflector.go:213] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108258       1 reflector.go:213] Stopping reflector *v1.Storage (20m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108272       1 reflector.go:213] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go@v0.19.0-rc.2/tools/cache/reflector.go:156\nI0829 10:43:13.108304       1 base_controller.go:136] Shutting down SnapshotCRDController ...\nI0829 10:43:13.108326       1 base_controller.go:83] Shutting down worker of SnapshotCRDController controller ...\nI0829 10:43:13.112917       1 base_controller.go:73] All SnapshotCRDController workers have been terminated\nW0829 10:43:13.108464       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\nI0829 10:43:13.108475       1 base_controller.go:136] Shutting down DefaultStorageClassController ...\n
Aug 29 10:43:19.266 E ns/openshift-service-ca pod/service-ca-6c54c9774-z4pxb node/ci-op-htssdq3h-e4498-cgrfj-master-2 container/service-ca-controller container exited with code 1 (Error): 
Aug 29 10:43:24.512 E ns/e2e-k8s-sig-apps-job-upgrade-7507 pod/foo-285vd node/ci-op-htssdq3h-e4498-cgrfj-worker-d-5zdbr container/c container exited with code 137 (Error): 
Aug 29 10:44:18.105 E kube-apiserver failed contacting the API: Get https://api.ci-op-htssdq3h-e4498.origin-ci-int-gce.dev.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=74723&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp 35.229.100.58:6443: connect: connection refused
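The failed URL above encodes a standard client-go watch: allowWatchBookmarks, a starting resourceVersion, and a server-side timeoutSeconds. The "connection refused" means the apiserver endpoint itself was briefly down, not that the watch was rejected. A sketch issuing an equivalent watch with the dynamic client (the resourceVersion and timeout are taken from the log and only illustrative):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client, err := dynamic.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        gvr := schema.GroupVersionResource{
            Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators",
        }
        timeout := int64(412)
        w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{
            AllowWatchBookmarks: true,
            ResourceVersion:     "74723",
            TimeoutSeconds:      &timeout,
        })
        if err != nil {
            log.Fatal(err) // e.g. "connection refused" during an apiserver rollout
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            log.Printf("event: %s", ev.Type)
        }
    }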
Aug 29 10:44:57.676 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Aug 29 10:46:18.204 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
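ClusterOperatorDegraded here means the clusterversion operator saw clusteroperator/openshift-apiserver reporting a Degraded condition while one of its three apiserver replicas was unavailable mid-rollout. A sketch of surfacing that condition programmatically, assuming the openshift/client-go config clientset:

    package main

    import (
        "context"
        "fmt"
        "log"

        configv1 "github.com/openshift/api/config/v1"
        configclient "github.com/openshift/client-go/config/clientset/versioned"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client, err := configclient.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        co, err := client.ConfigV1().ClusterOperators().Get(
            context.TODO(), "openshift-apiserver", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range co.Status.Conditions {
            if c.Type == configv1.OperatorDegraded && c.Status == configv1.ConditionTrue {
                fmt.Printf("degraded: %s\n", c.Message)
            }
        }
    }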