Result: SUCCESS
Tests: 1 failed / 155 succeeded
Started: 2021-04-10 04:09
Elapsed: 1h46m
Work namespace: ci-op-vt6vgfwv
Refs: openshift-4.7:cca97c76, 74:6e275211
Pod: 8e1c0e8a-99b2-11eb-8a8b-0a580a82032d
Repo: openshift/etcd
Revision: 1

Test Failures


openshift-tests [sig-arch] Monitor cluster while tests execute (51m28s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
37 error level events were detected during this test run:

Apr 10 05:00:41.429 E ns/openshift-ingress-canary pod/ingress-canary-r6hhd node/ip-10-0-170-76.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\n
Apr 10 05:01:11.517 E ns/openshift-ingress-canary pod/ingress-canary-8ggtt node/ip-10-0-170-76.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\n
Apr 10 05:02:33.753 E ns/e2e-daemonsets-4636 pod/daemon-set-vgnb2 node/ip-10-0-252-66.us-east-2.compute.internal container/app container exited with code 2 (Error): 
Apr 10 05:06:39.486 E ns/openshift-ingress-canary pod/ingress-canary-64zwh node/ip-10-0-170-76.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\n
Apr 10 05:11:02.083 E ns/e2e-sched-pred-4186 pod/e2e-host-exec node/ip-10-0-170-76.us-east-2.compute.internal container/e2e-host-exec container exited with code 2 (Error): 
Apr 10 05:11:02.097 E ns/e2e-sched-pred-4186 pod/pod1 node/ip-10-0-170-76.us-east-2.compute.internal container/agnhost container exited with code 2 (Error): 
Apr 10 05:11:02.110 E ns/e2e-sched-pred-4186 pod/pod2 node/ip-10-0-170-76.us-east-2.compute.internal container/agnhost container exited with code 2 (Error): 
Apr 10 05:23:03.730 E ns/openshift-ingress-canary pod/ingress-canary-c5vcr node/ip-10-0-252-66.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\n
Apr 10 05:23:03.847 E ns/openshift-ingress-canary pod/ingress-canary-kxd6q node/ip-10-0-170-76.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\n
Apr 10 05:28:06.812 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Apr 10 05:29:12.090 E ns/openshift-monitoring pod/kube-state-metrics-b57b57bb-q945w node/ip-10-0-186-224.us-east-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 10 05:29:12.103 E ns/openshift-marketplace pod/certified-operators-cmhk7 node/ip-10-0-186-224.us-east-2.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 10 05:29:12.124 E ns/openshift-monitoring pod/thanos-querier-5fb54b7cc4-rsq2g node/ip-10-0-186-224.us-east-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2021/04/10 04:37:55 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/10 04:37:55 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:37:55 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:37:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/10 04:37:55 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:37:55 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/10 04:37:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:37:55 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2021/04/10 04:37:55 http.go:107: HTTPS: listening on [::]:9091\nI0410 04:37:55.126302       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 10 05:29:12.189 E ns/openshift-marketplace pod/community-operators-n2xgd node/ip-10-0-186-224.us-east-2.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 10 05:29:12.205 E ns/openshift-monitoring pod/openshift-state-metrics-8f575c844-ll2nz node/ip-10-0-186-224.us-east-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 10 05:29:12.220 E ns/openshift-kube-storage-version-migrator pod/migrator-54bc4b4bfb-q5gtb node/ip-10-0-186-224.us-east-2.compute.internal container/migrator container exited with code 2 (Error): I0410 04:35:17.300763       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0410 04:35:17.300860       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0410 04:35:17.300868       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0410 04:35:17.300877       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0410 04:35:17.300886       1 migrator.go:18] FLAG: --kubeconfig=""\nI0410 04:35:17.300894       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0410 04:35:17.300904       1 migrator.go:18] FLAG: --log_dir=""\nI0410 04:35:17.300911       1 migrator.go:18] FLAG: --log_file=""\nI0410 04:35:17.300916       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0410 04:35:17.300923       1 migrator.go:18] FLAG: --logtostderr="true"\nI0410 04:35:17.300929       1 migrator.go:18] FLAG: --skip_headers="false"\nI0410 04:35:17.300936       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0410 04:35:17.300943       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0410 04:35:17.300949       1 migrator.go:18] FLAG: --v="2"\nI0410 04:35:17.300962       1 migrator.go:18] FLAG: --vmodule=""\n
Apr 10 05:29:12.243 E ns/openshift-monitoring pod/grafana-7b74c48559-tzhgp node/ip-10-0-186-224.us-east-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 10 05:29:12.291 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-186-224.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2021/04/10 04:37:55 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:37:55 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:37:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2021/04/10 04:37:55 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:37:55 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:37:55 http.go:107: HTTPS: listening on [::]:9095\nI0410 04:37:55.812098       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 10 05:29:12.291 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-186-224.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-10T04:37:55.203299332Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-10T04:37:55.203369816Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-10T04:37:55.203566577Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2021-04-10T04:37:55.20435278Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2021-04-10T04:37:56.873843937Z caller=reloader.go:347 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Apr 10 05:29:12.304 E ns/openshift-marketplace pod/redhat-marketplace-4hncf node/ip-10-0-186-224.us-east-2.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 10 05:29:12.337 E ns/openshift-monitoring pod/prometheus-adapter-6dd9b5d6f7-p966v node/ip-10-0-186-224.us-east-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0410 04:37:25.246112       1 adapter.go:98] successfully using in-cluster auth\nI0410 04:37:26.392548       1 dynamic_cafile_content.go:167] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0410 04:37:26.392590       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0410 04:37:26.393128       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0410 04:37:26.395113       1 secure_serving.go:197] Serving securely on [::]:6443\nI0410 04:37:26.395308       1 tlsconfig.go:240] Starting DynamicServingCertificateController\n
Apr 10 05:29:13.168 E ns/openshift-monitoring pod/telemeter-client-d469dd5c9-jgxmz node/ip-10-0-186-224.us-east-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 10 05:29:13.168 E ns/openshift-monitoring pod/telemeter-client-d469dd5c9-jgxmz node/ip-10-0-186-224.us-east-2.compute.internal container/reload container exited with code 2 (Error): 
Apr 10 05:29:13.195 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-186-224.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2021/04/10 04:37:55 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:37:55 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:37:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2021/04/10 04:37:55 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:37:55 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:37:55 http.go:107: HTTPS: listening on [::]:9095\nI0410 04:37:55.429367       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 10 05:29:13.195 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-186-224.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-10T04:37:54.897991386Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-10T04:37:54.898068058Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-10T04:37:54.898269245Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2021-04-10T04:37:54.899016575Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2021-04-10T04:37:56.551782911Z caller=reloader.go:347 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Apr 10 05:29:13.212 E ns/openshift-monitoring pod/thanos-querier-5fb54b7cc4-zf7hm node/ip-10-0-186-224.us-east-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2021/04/10 04:37:56 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/10 04:37:56 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:37:56 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:37:56 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/10 04:37:56 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:37:56 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/10 04:37:56 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:37:56 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2021/04/10 04:37:56 http.go:107: HTTPS: listening on [::]:9091\nI0410 04:37:56.174732       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2021/04/10 04:38:51 server.go:3107: http: TLS handshake error from 10.128.2.6:49200: read tcp 10.131.0.29:9091->10.128.2.6:49200: read: connection reset by peer\n
Apr 10 05:29:13.273 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-186-224.us-east-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2021/04/10 04:38:10 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2021/04/10 04:38:10 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:38:10 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:38:10 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/10 04:38:10 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:38:10 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2021/04/10 04:38:10 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:38:10 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2021/04/10 04:38:10 http.go:107: HTTPS: listening on [::]:9091\nI0410 04:38:10.202222       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 10 05:29:13.273 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-186-224.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-10T04:38:09.563394508Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-10T04:38:09.563452926Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-10T04:38:09.563704062Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=error ts=2021-04-10T04:38:09.569227635Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2021-04-10T04:38:15.037041562Z caller=reloader.go:347 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2021-04-10T04:38:15.037155171Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2021-04-10T04:39:15.908079542Z caller=reloader.go:347 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2021-04-10T04:41:34.859377797Z caller=reloader.go:347 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Apr 10 05:29:13.329 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-186-224.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2021/04/10 04:37:55 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/10 04:37:55 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/10 04:37:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2021/04/10 04:37:55 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/10 04:37:55 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/10 04:37:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/10 04:37:55 http.go:107: HTTPS: listening on [::]:9095\nI0410 04:37:55.910835       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 10 05:29:13.329 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-186-224.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-10T04:37:55.242917547Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-10T04:37:55.242999958Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-10T04:37:55.243205284Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2021-04-10T04:37:55.244128217Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2021-04-10T04:37:57.000353121Z caller=reloader.go:347 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Apr 10 05:29:42.540 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-21.us-east-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-10T05:29:39.473Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 10 05:29:57.001 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-76.us-east-2.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-10T05:29:48.284Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 10 05:39:11.595 E ns/openshift-monitoring pod/kube-state-metrics-b57b57bb-4v8cl node/ip-10-0-252-66.us-east-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 10 05:39:11.614 E ns/openshift-ingress-canary pod/ingress-canary-jnw4b node/ip-10-0-252-66.us-east-2.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\n
Apr 10 05:41:43.859 E ns/openshift-authentication pod/oauth-openshift-5b95ccc67c-nfh25 node/ip-10-0-145-237.us-east-2.compute.internal container/oauth-openshift container exited with code 2 (Error): .com/openshift/oauth-server/pkg/oauthserver.(*OAuthServerConfig).buildHandlerChainForOAuth(0xc000503680, 0x20497a0, 0xc000817080, 0xc000200fc0, 0x1ab2be0, 0xc000b1b580)\n	github.com/openshift/oauth-server/pkg/oauthserver/oauth_apiserver.go:307 +0xf0\nk8s.io/apiserver/pkg/server.completedConfig.New.func1(0x20497a0, 0xc000817080, 0x20497a0, 0xc000817080)\n	k8s.io/apiserver@v0.19.2/pkg/server/config.go:600 +0x45\nk8s.io/apiserver/pkg/server.NewAPIServerHandler(0x1d59398, 0xf, 0x208caa0, 0xc000839e00, 0xc000b1b7c0, 0x0, 0x0, 0x1c27e40)\n	k8s.io/apiserver@v0.19.2/pkg/server/handler.go:96 +0x285\nk8s.io/apiserver/pkg/server.completedConfig.New(0xc000200fc0, 0x0, 0x0, 0x1d59398, 0xf, 0x20a1c60, 0x2ce9d08, 0x0, 0x0, 0x0)\n	k8s.io/apiserver@v0.19.2/pkg/server/config.go:602 +0x129\ngithub.com/openshift/oauth-server/pkg/oauthserver.completedOAuthConfig.New(0xc000817000, 0xc000503688, 0x20a1c60, 0x2ce9d08, 0x3, 0x4, 0x208a760)\n	github.com/openshift/oauth-server/pkg/oauthserver/oauth_apiserver.go:290 +0x70\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.RunOsinServer(0xc0006f2600, 0xc000704300, 0xcaa, 0xeaa)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/server.go:41 +0x99\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.(*OsinServer).RunOsinServer(0xc000700350, 0xc000704300, 0x6007a5, 0x1ac8900)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/cmd.go:91 +0x29b\ngithub.com/openshift/oauth-server/pkg/cmd/oauth-server.NewOsinServer.func1(0xc000160dc0, 0xc000816da0, 0x0, 0x2)\n	github.com/openshift/oauth-server/pkg/cmd/oauth-server/cmd.go:39 +0x109\ngithub.com/spf13/cobra.(*Command).execute(0xc000160dc0, 0xc000816d80, 0x2, 0x2, 0xc000160dc0, 0xc000816d80)\n	github.com/spf13/cobra@v1.0.0/command.go:846 +0x2c2\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc000160580, 0xc000160580, 0x0, 0x0)\n	github.com/spf13/cobra@v1.0.0/command.go:950 +0x375\ngithub.com/spf13/cobra.(*Command).Execute(...)\n	github.com/spf13/cobra@v1.0.0/command.go:887\nmain.main()\n	github.com/openshift/oauth-server/cmd/oauth-server/main.go:41 +0x2f3\n
Apr 10 05:41:46.307 E ns/e2e-daemonsets-9031 pod/daemon-set-7rl2x node/ip-10-0-252-66.us-east-2.compute.internal reason/Failed (): 
Apr 10 05:46:10.384 E ns/e2e-test-prometheus-d45qp pod/execpod node/ip-10-0-170-76.us-east-2.compute.internal container/agnhost-container container exited with code 137 (Error):
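Each event line above follows a fixed shape (`<timestamp> E ns/<namespace> pod/<pod> node/<node> ...`), so the list can be tallied mechanically when triaging which namespaces produced the most errors. The following is an illustrative sketch only, not part of this job's tooling; it uses two lines copied from the run above as sample input, where a real script would read the full build log:

```python
import re
from collections import Counter

# Two sample events copied from the run above; a real triage script
# would read every event line from the build log instead.
events = [
    "Apr 10 05:00:41.429 E ns/openshift-ingress-canary pod/ingress-canary-r6hhd "
    "node/ip-10-0-170-76.us-east-2.compute.internal "
    "container/hello-openshift-canary container exited with code 2 (Error)",
    "Apr 10 05:29:12.090 E ns/openshift-monitoring pod/kube-state-metrics-b57b57bb-q945w "
    "node/ip-10-0-186-224.us-east-2.compute.internal "
    "container/kube-state-metrics container exited with code 2 (Error)",
]

# Pull the namespace out of the "ns/<name>" token on each error line.
NS_RE = re.compile(r"\bns/(\S+)")

counts = Counter(
    m.group(1) for line in events if (m := NS_RE.search(line))
)
print(counts)
```

Grouping by the `ns/` token rather than the pod name collapses restarts of the same workload into one bucket, which makes repeat offenders (such as `openshift-ingress-canary` and `openshift-monitoring` in this run) easy to spot.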