Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-01 17:57
Elapsed: 2h11m
Work namespace: ci-op-c0yxlphn
Pod: 4.3.0-0.nightly-2019-11-01-175335-azure-serial

Test Failures


openshift-tests: Monitor cluster while tests execute (1h8m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
19 error-level events were detected during this test run:

Nov 01 18:47:24.819 E ns/openshift-ingress pod/router-default-55c5fb6f5d-xbtzl node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:37:56.316Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:38:01.717Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:38:06.703Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:38:18.303Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:39:10.058Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:39:21.453Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:40:15.494Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:40:20.448Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:41:33.604Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:41:38.846Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:47:21.220Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 18:47:24.894 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-8vrrb node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 19:06:43.485 E ns/openshift-image-registry pod/node-ca-vznxw node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 19:17:15.703 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 01 19:18:45.411 E ns/openshift-ingress pod/router-default-55c5fb6f5d-srn62 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:11:01.396Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:11:32.694Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:11:37.669Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:11:44.933Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:11:49.892Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:17:53.294Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:17:58.288Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:18:19.487Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:18:24.472Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:18:29.472Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:18:37.632Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 19:18:46.548 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-4bmsz node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 19:18:46.586 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=config-reloader container exited with code 2 (Error): 2019/11/01 18:37:27 Watching directory: "/etc/alertmanager/config"\n
Nov 01 19:18:46.586 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:37:27 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:37:27 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:37:27 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:37:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:37:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:37:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:37:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:37:27 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 19:18:46.601 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 18:36:58 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 19:18:46.601 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=prometheus-proxy container exited with code 2 (Error): roxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 18:37:11 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:37:11 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:37:11 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:37:11 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 18:37:11 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 18:37:17 oauthproxy.go:774: basicauth: 10.129.2.5:57730 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:41:47 oauthproxy.go:774: basicauth: 10.129.2.5:59522 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:46:18 oauthproxy.go:774: basicauth: 10.129.2.5:60770 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:50:49 oauthproxy.go:774: basicauth: 10.129.2.5:35142 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:55:19 oauthproxy.go:774: basicauth: 10.129.2.5:38224 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:59:50 oauthproxy.go:774: basicauth: 10.129.2.5:41496 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:04:20 oauthproxy.go:774: basicauth: 10.129.2.5:44834 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:08:50 oauthproxy.go:774: basicauth: 10.129.2.5:47918 Authorization header does not start with 'Basic', skipping basic authentication\n2
Nov 01 19:18:46.601 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T18:36:58.045189628Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T18:36:58.04528753Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T18:36:58.046530858Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-01T18:37:03.049165139Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-01T18:37:08.046632905Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-01T18:37:13.046964034Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T18:37:18.232478894Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-01T18:38:42.062159166Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 19:18:46.652 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=config-reloader container exited with code 2 (Error): 2019/11/01 18:53:09 Watching directory: "/etc/alertmanager/config"\n
Nov 01 19:18:46.652 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:53:12 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:53:12 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:53:12 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:53:12 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:53:12 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:53:12 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:53:12 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:53:12 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 19:18:46.692 E ns/openshift-monitoring pod/prometheus-adapter-7f96c9c7d5-cpsgt node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-86ph5 container=prometheus-adapter container exited with code 2 (Error): I1101 18:35:32.384643       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:35:34.026667       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:19:20.489 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 01 19:20:17.935 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus3-jswsd container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:19:32.022Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:19:32.047Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:19:32.047Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:19:32.049Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:19:32.049Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:19:32.050Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:19:32.054Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:19:32.054Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:22:38.878 E ns/openshift-machine-config-operator pod/machine-config-daemon-mjnxp node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=oauth-proxy container exited with code 143 (Error): 
Nov 01 19:22:38.916 E ns/openshift-ingress pod/router-default-55c5fb6f5d-4v8j5 node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:09.632Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:14.642Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:19.635Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:24.643Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:50.612Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:19:55.586Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:20:08.668Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:20:13.624Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:20:20.869Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:20:25.807Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:21:20.132Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 19:29:38.902 E ns/openshift-image-registry pod/node-ca-6tlsf node/ci-op-c0yxlphn-09f59-t2mmm-worker-centralus2-bqgsk container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
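For triage, error-level events like those above can be tallied by namespace. A minimal sketch, assuming the monitor output has been saved to a local file (`events.log` is a placeholder name, not an artifact of this run):

```shell
# Count error-level events per namespace from a saved monitor log.
# Lines look like: "Nov 01 18:47:24.819 E ns/openshift-ingress pod/... container exited ..."
grep -o 'E ns/[a-z-]*' events.log | sort | uniq -c | sort -rn
```

On this run's events, openshift-monitoring would dominate the tally, which matches the pattern of monitoring pods being rescheduled during the serial suite.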

Stdout/stderr from junit_e2e_20191101-195018.xml
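The linked junit XML can also be inspected offline. A sketch under the assumption that the file follows the standard junit layout (`testsuite`/`testcase` elements with `failure` and `skipped` children); `summarize_junit` is a hypothetical helper, not part of openshift-tests:

```python
import xml.etree.ElementTree as ET

def summarize_junit(path):
    """Return (failed, passed, skipped) lists of test names from a junit XML file."""
    root = ET.parse(path).getroot()
    failed, passed, skipped = [], [], []
    # Handle both a bare <testsuite> root and a <testsuites> wrapper.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    for suite in suites:
        for case in suite.findall("testcase"):
            name = case.get("name", "")
            if case.find("failure") is not None:
                failed.append(name)
            elif case.find("skipped") is not None:
                skipped.append(name)
            else:
                passed.append(name)
    return failed, passed, skipped
```

Run against this job's junit_e2e_20191101-195018.xml, it should report the single failed monitor test alongside the 55 passes and 165 skips shown below.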



Passed Tests: 55

Skipped Tests: 165