Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-06 14:39
Elapsed: 1h40m
Work namespace: ci-op-zkg8ibj0
Pod: 4.3.0-0.ci-2019-11-06-143146-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute 54m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
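The focus string above can be used to reproduce just this suite member from a checkout of github.com/openshift/origin against a live cluster. A minimal sketch, assuming a local clone and a KUBECONFIG for a disposable test cluster (both paths below are hypothetical):

    # Hypothetical paths; assumes a checkout of github.com/openshift/origin
    # and credentials for a disposable test cluster.
    export KUBECONFIG=$HOME/clusters/dev/auth/kubeconfig
    cd $GOPATH/src/github.com/openshift/origin
    go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'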
29 error-level events were detected during this test run:

Nov 06 15:17:19.358 E ns/openshift-ingress pod/router-default-75ddbb59fd-x4pnk node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:17:19.417 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 15:13:15 Watching directory: "/etc/alertmanager/config"\n
Nov 06 15:17:19.417 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 15:13:16 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 15:13:16 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 15:13:16 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 15:13:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 15:13:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 15:13:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 15:13:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 15:13:16 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 15:17:19.448 E ns/openshift-monitoring pod/grafana-7667fdbf4f-9n7k8 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:17:19.448 E ns/openshift-monitoring pod/grafana-7667fdbf4f-9n7k8 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:24:27.241 E ns/openshift-machine-config-operator pod/machine-config-daemon-9hncr node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 15:36:47.121 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 15:37:43.040 E ns/openshift-marketplace pod/community-operators-6c499ffb4c-njzp4 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 06 15:37:43.052 E ns/openshift-monitoring pod/prometheus-adapter-759b89b944-kk7v9 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1106 15:07:08.244102       1 adapter.go:93] successfully using in-cluster auth\nI1106 15:07:09.986925       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 15:37:43.111 E ns/openshift-monitoring pod/kube-state-metrics-5cf6c7dbb5-29fh5 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=kube-state-metrics container exited with code 2 (Error): 
Nov 06 15:37:43.134 E ns/openshift-monitoring pod/grafana-7667fdbf4f-mt4t2 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Nov 06 15:37:43.144 E ns/openshift-marketplace pod/redhat-operators-76fbb79d5f-ss52v node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Nov 06 15:37:43.192 E ns/openshift-marketplace pod/certified-operators-6789cd97cd-kkzqb node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 06 15:37:44.236 E ns/openshift-monitoring pod/thanos-querier-774fb8dbd9-szr4f node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/06 15:17:26 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 15:17:26 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 15:17:26 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 15:17:26 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/06 15:17:26 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 15:17:26 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 15:17:26 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 15:17:26 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/06 15:17:26 http.go:96: HTTPS: listening on [::]:9091\n
Nov 06 15:37:44.248 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 15:12:46 Watching directory: "/etc/alertmanager/config"\n
Nov 06 15:37:44.248 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 15:12:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 15:12:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 15:12:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 15:12:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 15:12:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 15:12:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 15:12:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 15:12:46 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 15:37:44.289 E ns/openshift-ingress pod/router-default-75ddbb59fd-6kv4t node/ci-op--2fz2f-w-b-97msr.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:25:46.855Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:25:51.848Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:26:05.544Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:26:10.529Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:07.564Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:12.567Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:19.896Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:24.897Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:29.899Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:34.893Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T15:37:39.902Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 15:37:51.947 E ns/openshift-monitoring pod/prometheus-adapter-759b89b944-2pnnr node/ci-op--2fz2f-w-c-gs685.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:37:51.961 E ns/openshift-marketplace pod/certified-operators-6789cd97cd-nkjmd node/ci-op--2fz2f-w-c-gs685.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:37:51.990 E ns/openshift-ingress pod/router-default-75ddbb59fd-mbkzf node/ci-op--2fz2f-w-c-gs685.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:38:07.362 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T15:38:02.997Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T15:38:03.002Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T15:38:03.003Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T15:38:03.003Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T15:38:03.003Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T15:38:03.004Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T15:38:03.004Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T15:38:03.004Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T15:38:03.005Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T15:38:03.005Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 15:38:10.762 E ns/openshift-ingress pod/router-default-75ddbb59fd-rsct7 node/ci-op--2fz2f-w-d-mvsnb.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 15:38:38.296 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 15:42:17.178 E ns/openshift-marketplace pod/certified-operators-6789cd97cd-6jkrh node/ci-op--2fz2f-w-b-qqqvl.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 06 15:42:17.210 E ns/openshift-machine-config-operator pod/machine-config-daemon-dvkgg node/ci-op--2fz2f-w-b-qqqvl.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 15:51:15.423 E ns/openshift-machine-config-operator pod/machine-config-daemon-29l8q node/ci-op--2fz2f-w-b-qqqvl.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 15:56:15.923 E ns/openshift-machine-config-operator pod/machine-config-daemon-85z27 node/ci-op--2fz2f-w-b-qqqvl.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 15:58:25.216 E ns/openshift-machine-config-operator pod/machine-config-daemon-swhpc node/ci-op--2fz2f-w-b-qqqvl.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 16:05:45.268 E ns/openshift-marketplace pod/certified-operators-6789cd97cd-db8n7 node/ci-op--2fz2f-w-c-l7wvt.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
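The exit codes above follow the usual 128+signal convention: 137 corresponds to SIGKILL (128+9; reported here as ContainerStatusUnknown because the container was already gone when the pod was terminated) and 143 to SIGTERM (128+15), while codes 1 and 2 are ordinary application error exits. A quick way to bucket the events by exit code, as a sketch assuming the list above has been saved to a local file (events.txt is hypothetical):

    # events.txt is a hypothetical local copy of the event list above.
    grep -o 'exited with code [0-9]*' events.txt | sort | uniq -c | sort -rn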

Full stdout/stderr: junit_e2e_20191106-160925.xml



Passed Tests: 57

Skipped Tests: 8