Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-08 08:08
Elapsed: 2h4m
Work namespace: ci-op-p9jw9z2y
Pod: 4.3.0-0.nightly-2019-11-08-080321-openstack-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h4m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
21 error-level events were detected during this test run:

Nov 08 09:02:27.106 E ns/openshift-monitoring pod/alertmanager-main-0 node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=config-reloader container exited with code 2 (Error): 2019/11/08 08:49:05 Watching directory: "/etc/alertmanager/config"\n
Nov 08 09:02:27.106 E ns/openshift-monitoring pod/alertmanager-main-0 node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 08:49:06 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 08:49:06 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 08:49:06 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 08:49:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 08:49:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 08:49:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 08:49:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 08:49:06 http.go:96: HTTPS: listening on [::]:9095\n2019/11/08 08:49:08 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/08 08:49:09 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Nov 08 09:11:04.279 E ns/openshift-machine-config-operator pod/machine-config-daemon-r6b76 node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=oauth-proxy container exited with code 143 (Error): 
Nov 08 09:18:05.528 E ns/openshift-machine-config-operator pod/machine-config-daemon-6nw2g node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=oauth-proxy container exited with code 143 (Error): 
Nov 08 09:43:30.705 E ns/openshift-machine-config-operator pod/machine-config-daemon-8m9sd node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=oauth-proxy container exited with code 143 (Error): 
Nov 08 09:58:39.896 E ns/openshift-monitoring pod/thanos-querier-5bc9799bb8-wkrgm node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=oauth-proxy container exited with code 2 (Error): 2019/11/08 08:48:23 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 08:48:23 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 08:48:23 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 08:48:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 08:48:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 08:48:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 08:48:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 08:48:23 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 08:48:23 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 09:58:40.074 E ns/openshift-monitoring pod/telemeter-client-8556bfbbb9-bx2c6 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=reload container exited with code 2 (Error): 
Nov 08 09:58:40.074 E ns/openshift-monitoring pod/telemeter-client-8556bfbbb9-bx2c6 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=telemeter-client container exited with code 2 (Error): 
Nov 08 09:58:40.135 E ns/openshift-ingress pod/router-default-bdb7795df-p2d7j node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=router container exited with code 2 (Error): ": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:42:07.609Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1108 09:42:28.237367       1 limiter.go:140] error reloading router: wait: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n2019-11-08T09:42:33.231Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:42:48.274Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:42:53.279Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:43:30.061Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:43:35.059Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:43:40.068Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:44:00.518Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:58:29.513Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T09:58:34.510Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 09:58:40.173 E ns/openshift-marketplace pod/redhat-operators-7b77f58cc6-wddfs node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=redhat-operators container exited with code 2 (Error): 
Nov 08 09:58:42.091 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-t6pr7 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 09:58:42.112 E ns/openshift-marketplace pod/community-operators-69db74687d-srd9m node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=community-operators container exited with code 2 (Error): 
Nov 08 09:58:43.086 E ns/openshift-monitoring pod/prometheus-adapter-5b74c895c7-ldsxj node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=prometheus-adapter container exited with code 2 (Error): I1108 08:46:53.553428       1 adapter.go:93] successfully using in-cluster auth\nI1108 08:46:54.597346       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 09:58:43.102 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-26lfq node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 09:58:43.120 E ns/openshift-monitoring pod/prometheus-adapter-5b74c895c7-fhcz9 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=prometheus-adapter container exited with code 2 (Error): I1108 08:46:54.850124       1 adapter.go:93] successfully using in-cluster auth\nI1108 08:46:55.804998       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 09:58:43.138 E ns/openshift-monitoring pod/alertmanager-main-0 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=config-reloader container exited with code 2 (Error): 2019/11/08 09:02:46 Watching directory: "/etc/alertmanager/config"\n
Nov 08 09:58:43.138 E ns/openshift-monitoring pod/alertmanager-main-0 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 09:02:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 09:02:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 09:02:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 09:02:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 09:02:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 09:02:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 09:02:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 09:02:46 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 09:58:43.167 E ns/openshift-monitoring pod/prometheus-k8s-1 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 08:48:36 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 08:50:05 config map updated\n2019/11/08 08:50:06 successfully triggered reload\n
Nov 08 09:58:43.167 E ns/openshift-monitoring pod/prometheus-k8s-1 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=prometheus-proxy container exited with code 2 (Error): hproxy.go:774: basicauth: 10.128.2.13:34904 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:02:25 oauthproxy.go:774: basicauth: 10.128.2.13:38246 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:06:55 oauthproxy.go:774: basicauth: 10.128.2.13:44290 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:11:26 oauthproxy.go:774: basicauth: 10.128.2.13:50316 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:15:56 oauthproxy.go:774: basicauth: 10.128.2.13:56340 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:20:26 oauthproxy.go:774: basicauth: 10.128.2.13:34446 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:24:56 oauthproxy.go:774: basicauth: 10.128.2.13:40462 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:29:26 oauthproxy.go:774: basicauth: 10.128.2.13:46476 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:33:56 oauthproxy.go:774: basicauth: 10.128.2.13:52514 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:38:27 oauthproxy.go:774: basicauth: 10.128.2.13:58492 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:42:57 oauthproxy.go:774: basicauth: 10.128.2.13:36322 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 09:47:27 oauthproxy.go:774: basicauth: 10.128.2.13:42462 Authorization header does not start with 'Basic', skipping basic authentication\n201
Nov 08 09:58:43.167 E ns/openshift-monitoring pod/prometheus-k8s-1 node/p9jw9z2y-9f2ed-49mz7-worker-hkbht container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T08:48:36.050007315Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T08:48:36.050198256Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T08:48:36.053066716Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T08:48:41.210849938Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T08:50:06.14565666Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 09:59:18.203 E ns/openshift-monitoring pod/prometheus-k8s-1 node/p9jw9z2y-9f2ed-49mz7-worker-rnw5b container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T09:59:07.310Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T09:59:07.314Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T09:59:07.314Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T09:59:07.315Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T09:59:07.316Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T09:59:07.316Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T09:59:07.316Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T09:59:07.316Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T09:59:07.319Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T09:59:07.319Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08

Full stdout/stderr is available in junit_e2e_20191108-100025.xml

Passed tests: 54
Skipped tests: 173