Result: FAILURE
Tests: 3 failed / 53 succeeded
Started: 2019-11-08 09:49
Elapsed: 2h31m
Work namespace: ci-op-7i03490c
pod: 4.3.0-0.nightly-2019-11-08-094604-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute 1h20m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
24 error level events were detected during this test run:

Nov 08 10:58:53.870 E ns/openshift-ingress pod/router-default-595c859f98-5w8k6 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:56.812 E ns/openshift-ingress pod/router-default-595c859f98-w78dt node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:35:03.015Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:35:07.997Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:35:27.873Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:35:32.825Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:36:46.129Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:36:54.675Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:48:15.429Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:48:20.448Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:48:43.891Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:48:48.834Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T10:58:53.241Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 10:58:56.936 E ns/openshift-machine-config-operator pod/machine-config-daemon-ls4r8 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=oauth-proxy container exited with code 143 (Error): 
Nov 08 10:58:57.621 E ns/openshift-monitoring pod/grafana-85bf74556f-kqv6c node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=grafana-proxy container exited with code 2 (Error): 
Nov 08 10:58:57.694 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T10:29:46.363Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T10:29:46.378Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T10:29:46.378Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T10:29:46.379Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T10:29:46.380Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T10:29:46.380Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T10:29:46.380Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T10:29:46.390Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T10:29:46.390Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 10:58:57.694 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 10:29:54 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 10:35:34 config map updated\n2019/11/08 10:35:34 successfully triggered reload\n
Nov 08 10:58:57.694 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T10:29:50.687672176Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T10:29:50.688145494Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T10:29:50.697755459Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:29:55.690768709Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:30:00.69064772Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:30:05.690371225Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T10:30:10.93625069Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T10:30:11.161064038Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T10:
Nov 08 10:58:57.721 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.721 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.721 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.770 E ns/openshift-machine-config-operator pod/machine-config-daemon-27kgg node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=oauth-proxy container exited with code 143 (Error): 
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:57.918 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-r85gh container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 10:58:58.628 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=config-reloader container exited with code 2 (Error): 2019/11/08 10:32:15 Watching directory: "/etc/alertmanager/config"\n
Nov 08 10:58:58.628 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 10:32:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:32:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:32:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:32:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 10:32:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:32:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:32:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:32:15 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 10:59:59.026 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 11:00:11.381 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T11:00:07.871Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T11:00:07.884Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T11:00:07.884Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T11:00:07.885Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T11:00:07.885Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T11:00:07.885Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T11:00:07.906Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T11:00:07.906Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 11:00:11.525 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-7i03490c-09f59-zzxg9-worker-westus-v8drn container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T11:00:07.179Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T11:00:07.179Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T11:00:07.180Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T11:00:07.180Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T11:00:07.181Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T11:00:07.183Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T11:00:07.183Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T11:00:07.183Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T11:00:07.191Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T11:00:07.192Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 11:48:19.636 E ns/openshift-authentication pod/oauth-openshift-67696765bc-bkxd9 node/ci-op-7i03490c-09f59-zzxg9-master-1 container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated

				


openshift-tests [Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously [Suite:openshift/conformance/serial] 12m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Machines\]\[Serial\]\sManaged\scluster\sshould\sgrow\sand\sdecrease\swhen\sscaling\sdifferent\smachineSets\ssimultaneously\s\[Suite\:openshift\/conformance\/serial\]$'
fail [github.com/openshift/origin/test/extended/machines/scale.go:201]: Timed out after 720.002s.
Expected
    <bool>: false
to be true
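
The machineSets failure above is a Gomega polling timeout: the assertion at scale.go:201 waits for a condition to become true and gives up after 720s (12 minutes). Below is a minimal sketch of that polling pattern, assuming a hypothetical machineSetsScaled() check; it is illustrative only and not the actual code in test/extended/machines/scale.go.

    package example

    import (
    	"time"

    	. "github.com/onsi/gomega"
    )

    // machineSetsScaled is a hypothetical stand-in for the condition the test
    // polls, e.g. "do the scaled MachineSets report the desired number of
    // ready replicas?".
    func machineSetsScaled() bool {
    	// ...query the cluster for MachineSet / Node status here...
    	return false
    }

    func waitForScale() {
    	// Poll every few seconds for up to 12 minutes; if the condition never
    	// returns true, Gomega fails with the shape seen above:
    	//   Timed out after 720.00Ns.
    	//   Expected
    	//       <bool>: false
    	//   to be true
    	Eventually(machineSetsScaled, 12*time.Minute, 5*time.Second).Should(BeTrue())
    }

In a failure of this shape the interesting information is not the boolean itself but why the polled condition never became true within the window, e.g. machines that never provisioned or nodes that never went Ready on the Azure serial job.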