Result SUCCESS
Tests 1 failed / 57 succeeded
Started 2019-11-06 17:04
Elapsed 1h44m
Work namespace ci-op-7nwf2qjz
Pod 4.3.0-0.ci-2019-11-06-170148-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute 54m27s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
24 error level events were detected during this test run:

Nov 06 17:45:55.784 E ns/openshift-monitoring pod/prometheus-adapter-74ccdcf5b7-nrvcq node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1106 17:30:29.966258       1 adapter.go:93] successfully using in-cluster auth\nI1106 17:30:30.432441       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 17:45:55.847 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwvbx node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 17:45:56.943 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 17:30:05 Watching directory: "/etc/alertmanager/config"\n
Nov 06 17:45:56.943 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 17:30:06 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:30:06 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 17:30:06 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 17:30:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 17:30:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 17:30:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:30:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 17:30:06 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 17:58:47.745 E ns/openshift-machine-config-operator pod/machine-config-daemon-x9mx5 node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 18:12:36.307 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 18:13:20.836 E ns/openshift-marketplace pod/certified-operators-c6448b4f6-57dtq node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 06 18:13:20.901 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 17:29:30 Watching directory: "/etc/alertmanager/config"\n
Nov 06 18:13:20.901 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 17:29:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:29:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 17:29:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 17:29:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 17:29:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 17:29:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:29:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 17:29:30 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 18:13:20.937 E ns/openshift-marketplace pod/community-operators-74b7658499-gkfj7 node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 06 18:13:21.075 E ns/openshift-ingress pod/router-default-ffd95fb7d-d8wrr node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T17:58:46.958Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T17:58:59.833Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:08:21.540Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:08:26.537Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:08:39.599Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:08:44.586Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:08:51.270Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:12:57.925Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:13:02.924Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:13:10.774Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:13:15.765Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 18:13:22.203 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 17:46:16 Watching directory: "/etc/alertmanager/config"\n
Nov 06 18:13:22.203 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-c-9tdzl.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 17:46:16 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:46:16 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 17:46:16 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 17:46:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 17:46:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 17:46:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 17:46:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 17:46:16 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 18:13:54.348 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--kg4jd-w-d-sr9bq.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T18:13:47.337Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T18:13:47.342Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T18:13:47.342Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T18:13:47.343Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T18:13:47.343Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T18:13:47.343Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T18:13:47.345Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T18:13:47.345Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 18:14:18.722 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 18:20:37.078 E ns/openshift-machine-config-operator pod/machine-config-daemon-m2gmn node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 18:20:37.078 E ns/openshift-machine-config-operator pod/machine-config-daemon-m2gmn node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 18:20:37.130 E ns/openshift-ingress pod/router-default-ffd95fb7d-pd54q node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:17:40.526Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:17:45.525Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:17:53.721Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:17:58.710Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:18:10.999Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:18:15.995Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:18:28.495Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:18:33.490Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:19:14.929Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:19:20.102Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T18:20:35.711Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 18:20:37.186 E ns/openshift-monitoring pod/prometheus-adapter-74ccdcf5b7-47k9x node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1106 18:13:32.468020       1 adapter.go:93] successfully using in-cluster auth\nI1106 18:13:33.638890       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 18:20:37.226 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): -monitoring:alertmanager-main\n2019/11/06 18:13:54 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 18:13:54 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 18:13:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 18:13:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 18:13:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 18:13:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 18:13:54 http.go:96: HTTPS: listening on [::]:9095\n2019/11/06 18:13:57 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:13:59 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:00 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:02 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:02 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:04 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:05 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:07 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:09 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:12 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/06 18:14:14 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Nov 06 18:20:37.226 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 18:13:54 Watching directory: "/etc/alertmanager/config"\n
Nov 06 18:32:43.369 E ns/openshift-machine-config-operator pod/machine-config-daemon-zszcl node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 18:32:44.415 E ns/openshift-image-registry pod/node-ca-jnnxf node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 18:35:52.977 E ns/openshift-machine-config-operator pod/machine-config-daemon-gx2jk node/ci-op--kg4jd-w-c-rkgcg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 

				
stdout/stderr from junit_e2e_20191106-183713.xml

57 Passed Tests

8 Skipped Tests