Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-08 00:01
Elapsed: 2h27m
Work namespace: ci-op-w8f7ntgf
Pod: 4.3.0-0.nightly-2019-11-07-235426-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h13m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
15 error-level events were detected during this test run; a triage sketch follows the list:

Nov 08 01:18:22.884 E ns/openshift-image-registry pod/node-ca-przxk node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:18:24.786 E ns/openshift-machine-config-operator pod/machine-config-daemon-sjr76 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:18:24.786 E ns/openshift-machine-config-operator pod/machine-config-daemon-sjr76 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:20:50.195 E ns/openshift-image-registry pod/node-ca-v4prb node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:22:34.624 E ns/openshift-image-registry pod/node-ca-nclp5 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:33:28.527 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 01:33:38.784 E ns/openshift-monitoring pod/grafana-85bf74556f-kvrwb node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus3-7llzc container=grafana-proxy container exited with code 2 (Error): 
Nov 08 01:33:38.821 E ns/openshift-ingress pod/router-default-6dcfb5f766-mwblg node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus3-7llzc container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:24:28.022Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:24:52.888Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:29:58.615Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:31:01.589Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:31:06.586Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:31:30.706Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:31:48.946Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:32:58.598Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:33:10.596Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:33:16.283Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T01:33:21.280Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 01:34:37.218 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus3-n2zkr container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T01:34:18.974Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T01:34:18.984Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T01:34:18.985Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T01:34:18.986Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T01:34:18.986Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T01:34:18.986Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T01:34:18.991Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T01:34:18.991Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 01:37:10.408 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:37:10.408 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:37:10.408 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:47:07.685 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:47:07.685 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:47:07.685 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-w8f7ntgf-09f59-q2c5s-worker-centralus2-c8xmn container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
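Exit code 137 is 128 + 9, i.e. the container received SIGKILL, typically from the kubelet while a node is drained or rebooted during the serial suite, which fits most of the events above. To run the same kind of scan against a live cluster, a minimal sketch with the official Python Kubernetes client could look like the following (a hypothetical triage helper, not part of openshift-tests; it assumes a reachable kubeconfig for the cluster under test):

from kubernetes import client, config

# Hypothetical triage helper: list containers whose last termination was
# exit code 137 (128 + SIGKILL), matching the events reported above.
# Assumes kubeconfig access to the cluster under test.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for cs in pod.status.container_statuses or []:
        term = cs.last_state.terminated if cs.last_state else None
        if term is not None and term.exit_code == 137:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={cs.name} reason={term.reason}")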

Full stdout/stderr is available in junit_e2e_20191108-021149.xml.
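For offline triage, the failing cases can also be pulled straight out of that junit file; a minimal sketch using only the Python standard library (the filename comes from the line above, and the testsuite/testcase layout is the standard JUnit schema):

import xml.etree.ElementTree as ET

# Path taken from the report above; point it at the downloaded artifact.
JUNIT_FILE = "junit_e2e_20191108-021149.xml"

root = ET.parse(JUNIT_FILE).getroot()

# A <testcase> failed if it carries a <failure> child; print the case name
# and the first few hundred characters of its captured output.
for case in root.iter("testcase"):
    failure = case.find("failure")
    if failure is not None:
        print(case.get("name"))
        print((failure.text or "").strip()[:500])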

Passed Tests: 55
Skipped Tests: 173