Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-07 11:35
Elapsed: 2h19m
Work namespace: ci-op-rlvywc4k
Pod: 4.3.0-0.nightly-2019-11-07-113138-azure-serial

Test Failures


openshift-tests: Monitor cluster while tests execute (1h8m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
28 error-level events were detected during this test run:

Nov 07 12:40:39.705 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 12:41:16.079 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 12:41:20.201 E ns/openshift-ingress pod/router-default-66f647f96f-ch2ll node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:27:49.088Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:30:07.610Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:30:12.808Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:40:16.613Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:40:21.614Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:40:33.035Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:40:38.032Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:40:44.268Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:41:03.714Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:41:08.797Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:41:13.720Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.249 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:41:21.310 E ns/openshift-monitoring pod/telemeter-client-8494ff7bb4-jx75g node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=telemeter-client container exited with code 2 (Error): 
Nov 07 12:41:21.310 E ns/openshift-monitoring pod/telemeter-client-8494ff7bb4-jx75g node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-flh97 container=reload container exited with code 2 (Error): 
Nov 07 12:41:37.456 E ns/openshift-ingress pod/router-default-66f647f96f-cdtff node/ci-op-rlvywc4k-09f59-859wx-worker-centralus2-8gjmf container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 12:42:15.182 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus1-zkd2n container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T12:41:52.364Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T12:41:52.368Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T12:41:52.380Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T12:41:52.383Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T12:41:52.383Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T12:41:52.383Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T12:41:52.383Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T12:41:52.383Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T12:41:52.384Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T12:41:52.384Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T12:41:52.384Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T12:41:52.384Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T12:41:52.384Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T12:41:52.387Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T12:41:52.570Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T12:41:52.570Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 12:55:25.659 E ns/openshift-ingress pod/router-default-66f647f96f-qs257 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=router container exited with code 2 (Error):  " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1107 12:42:30.334561       1 limiter.go:140] error reloading router: waitid: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n2019-11-07T12:42:35.308Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:42:40.304Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:42:45.312Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:43:07.890Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:43:12.881Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:44:29.467Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:46:20.987Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:46:25.959Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:46:31.041Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - 
Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T12:46:35.975Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 07 12:55:26.672 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-8nqrq node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=kube-state-metrics container exited with code 2 (Error): 
Nov 07 12:55:26.765 E ns/openshift-marketplace pod/redhat-operators-6469f85cd6-9xc44 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=redhat-operators container exited with code 2 (Error): 
Nov 07 12:55:26.787 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-vtmgf node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=openshift-state-metrics container exited with code 2 (Error): 
Nov 07 12:55:26.824 E ns/openshift-marketplace pod/community-operators-b46664546-rzd5g node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=community-operators container exited with code 2 (Error): 
Nov 07 12:55:26.864 E ns/openshift-marketplace pod/certified-operators-57488fd75f-sc9bv node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=certified-operators container exited with code 2 (Error): 
Nov 07 12:55:26.929 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=config-reloader container exited with code 2 (Error): 2019/11/07 12:18:44 Watching directory: "/etc/alertmanager/config"\n
Nov 07 12:55:26.929 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 12:18:44 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 12:18:44 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 12:18:44 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 12:18:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 12:18:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 12:18:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 12:18:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 12:18:44 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 12:55:26.957 E ns/openshift-machine-config-operator pod/machine-config-daemon-8lvvv node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=oauth-proxy container exited with code 143 (Error): 
Nov 07 13:06:20.137 E ns/openshift-dns pod/dns-default-shwgr node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 13:06:20.137 E ns/openshift-dns pod/dns-default-shwgr node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 13:10:21.784 E ns/openshift-dns pod/dns-default-9fmfm node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 13:10:21.784 E ns/openshift-dns pod/dns-default-9fmfm node/ci-op-rlvywc4k-09f59-859wx-worker-centralus3-n8z6g container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 13:29:46.344 E kube-apiserver Kube API started failing: Get https://api.ci-op-rlvywc4k-09f59.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Full stdout/stderr: junit_e2e_20191107-133707.xml



Passed tests: 55
Skipped tests: 173