Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-06 18:54
Elapsed: 1h56m
Work namespace: ci-op-q9nnwrts
Pod: 4.3.0-0.nightly-2019-11-06-184828-openstack-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h5m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
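
To reproduce only this test, the command above can be run against a live cluster. A minimal sketch, assuming a local checkout of openshift/origin and a KUBECONFIG pointing at a test cluster (both assumptions, not recorded in this report):

# run from the root of an openshift/origin checkout (assumption)
# the KUBECONFIG path below is hypothetical; point it at your test cluster
export KUBECONFIG=$HOME/.kube/config
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'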
24 error-level events were detected during this test run:

Nov 06 19:34:00.542 E ns/openshift-monitoring pod/prometheus-adapter-7f866f5595-c5bkt node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=prometheus-adapter container exited with code 2 (Error): I1106 19:25:38.174510       1 adapter.go:93] successfully using in-cluster auth\nI1106 19:25:38.827190       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 19:34:00.565 E ns/openshift-ingress pod/router-default-6f898df667-l5sts node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:29:48.542Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:29:53.524Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:29:58.526Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:30:30.976Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:30:35.981Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:30:40.982Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:31:52.619Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:31:57.605Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:32:02.604Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:32:39.128Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T19:33:58.341Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 19:34:00.600 E ns/openshift-marketplace pod/community-operators-5888574794-nvrjm node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=community-operators container exited with code 2 (Error): 
Nov 06 19:34:00.617 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-xzdm6 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=openshift-state-metrics container exited with code 2 (Error): 
Nov 06 19:34:00.695 E ns/openshift-monitoring pod/thanos-querier-856bcdb497-lrrvn node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=oauth-proxy container exited with code 2 (Error): 2019/11/06 19:28:09 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 19:28:09 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 19:28:09 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 19:28:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/06 19:28:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 19:28:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 19:28:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 19:28:09 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/06 19:28:09 http.go:96: HTTPS: listening on [::]:9091\n
Nov 06 19:34:00.720 E ns/openshift-marketplace pod/certified-operators-5d77cfd979-zg7nl node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=certified-operators container exited with code 2 (Error): 
Nov 06 19:34:00.781 E ns/openshift-machine-config-operator pod/machine-config-daemon-mmvrv node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=oauth-proxy container exited with code 143 (Error): 
Nov 06 19:34:00.817 E ns/openshift-monitoring pod/prometheus-k8s-0 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T19:28:20.903Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T19:28:20.909Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T19:28:20.910Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T19:28:20.911Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T19:28:20.911Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T19:28:20.911Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T19:28:20.914Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T19:28:20.914Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 19:34:00.817 E ns/openshift-monitoring pod/prometheus-k8s-0 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/06 19:28:24 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 06 19:34:00.817 E ns/openshift-monitoring pod/prometheus-k8s-0 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-06T19:28:23.478255651Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-06T19:28:23.47841068Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-06T19:28:23.501060879Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-06T19:28:28.629426621Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-06T19:31:28.782039454Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 06 19:34:01.619 E ns/openshift-monitoring pod/grafana-6979ff6d7d-h4rq5 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=grafana-proxy container exited with code 2 (Error): 
Nov 06 19:34:01.654 E ns/openshift-monitoring pod/alertmanager-main-0 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=config-reloader container exited with code 2 (Error): 2019/11/06 19:29:17 Watching directory: "/etc/alertmanager/config"\n
Nov 06 19:34:01.654 E ns/openshift-monitoring pod/alertmanager-main-0 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=alertmanager-proxy container exited with code 2 (Error): rviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 19:29:17 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 19:29:17 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 19:29:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 19:29:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 19:29:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 19:29:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 19:29:17 http.go:96: HTTPS: listening on [::]:9095\n2019/11/06 19:29:21 oauthproxy.go:782: requestauth: 10.131.0.20:49402 invalid bearer token\n2019/11/06 19:29:26 oauthproxy.go:782: requestauth: 10.131.0.20:49402 invalid bearer token\n2019/11/06 19:29:46 oauthproxy.go:782: requestauth: 10.131.0.20:49402 invalid bearer token\n2019/11/06 19:30:00 oauthproxy.go:782: requestauth: 10.131.0.20:49402 invalid bearer token\n2019/11/06 19:31:31 oauthproxy.go:782: requestauth: 10.131.0.20:52668 invalid bearer token\n2019/11/06 19:31:42 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n2019/11/06 19:32:00 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n2019/11/06 19:32:01 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n2019/11/06 19:32:31 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n2019/11/06 19:32:34 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n2019/11/06 19:33:00 oauthproxy.go:782: requestauth: 10.131.0.20:52668 invalid bearer token\n2019/11/06 19:33:04 oauthproxy.go:782: requestauth: 10.131.0.20:52668 invalid bearer token\n2019/11/06 19:33:30 oauthproxy.go:782: requestauth: 10.128.2.6:57280 invalid bearer token\n
Nov 06 19:34:01.673 E ns/openshift-monitoring pod/prometheus-adapter-7f866f5595-b2m6q node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=prometheus-adapter container exited with code 2 (Error): I1106 19:25:38.820556       1 adapter.go:93] successfully using in-cluster auth\nI1106 19:25:40.660340       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 19:34:01.712 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-qxqtj node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=kube-state-metrics container exited with code 2 (Error): 
Nov 06 19:34:01.753 E ns/openshift-marketplace pod/redhat-operators-c747997b-2kbc6 node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=redhat-operators container exited with code 2 (Error): 
Nov 06 19:34:30.496 E ns/openshift-monitoring pod/prometheus-k8s-0 node/q9nnwrts-9f2ed-8kc9k-worker-q4f7p container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T19:34:25.007Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T19:34:25.026Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T19:34:25.027Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T19:34:25.028Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T19:34:25.029Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T19:34:25.029Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T19:34:25.029Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T19:34:25.029Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T19:34:25.034Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T19:34:25.034Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 19:38:21.676 E ns/openshift-ingress pod/router-default-6f898df667-9bzr2 node/q9nnwrts-9f2ed-8kc9k-worker-rq7hl container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 19:42:28.167 E ns/openshift-machine-config-operator pod/machine-config-daemon-pm74x node/q9nnwrts-9f2ed-8kc9k-worker-rq7hl container=oauth-proxy container exited with code 143 (Error): 
Nov 06 19:57:25.494 E ns/openshift-image-registry pod/node-ca-98wxk node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 20:13:58.448 E ns/openshift-machine-config-operator pod/machine-config-daemon-4h9zh node/q9nnwrts-9f2ed-8kc9k-worker-rq7hl container=oauth-proxy container exited with code 143 (Error): 
Nov 06 20:13:58.492 E ns/openshift-ingress pod/router-default-6f898df667-pr6jr node/q9nnwrts-9f2ed-8kc9k-worker-rq7hl container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:09:55.239Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:10:11.706Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:10:16.695Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:10:30.096Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:10:35.084Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:10:56.284Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:11:01.278Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:11:06.282Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:11:11.276Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:11:19.165Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T20:11:24.188Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 20:27:16.849 E ns/openshift-machine-config-operator pod/machine-config-daemon-fjwft node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 20:27:16.849 E ns/openshift-machine-config-operator pod/machine-config-daemon-fjwft node/q9nnwrts-9f2ed-8kc9k-worker-vz7wg container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
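
Most of the events above are containers exiting with code 2 (generic crash), 143 (SIGTERM, i.e. 128+15), or 137 (SIGKILL, i.e. 128+9), likely while the serial suite was disrupting their nodes. A minimal sketch for digging into one of them with oc; the pod and container names are taken from the list above, and the hashed pod names will differ on any new run:

# logs of the previous (crashed) container instance for the prometheus pod
oc logs -n openshift-monitoring prometheus-k8s-0 -c prometheus --previous
# warning-level events in the same namespace around the failure window
oc get events -n openshift-monitoring --field-selector type=Warning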

stdout/stderr for this test are recorded in junit_e2e_20191106-203726.xml
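
The junit artifact can also be inspected offline; a minimal sketch, assuming junit_e2e_20191106-203726.xml has been downloaded to the current directory:

# list testcases that recorded a <failure> element, with one line of context
grep -B1 '<failure' junit_e2e_20191106-203726.xml | head -n 20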



Passed tests: 54 (list collapsed)

Skipped tests: 173 (list collapsed)