Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-08 03:42
Elapsed: 2h8m
Work namespace: ci-op-5g2q6wci
Pod: 4.3.0-0.nightly-2019-11-08-033856-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute 1h8m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
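The --ginkgo.focus value above is a regular expression anchored to the end of the full test name (\s matches whitespace; the hyphen is escaped). As a sanity check before a re-run, the pattern can be tested against the test name with GNU grep's PCRE mode (a hypothetical one-liner, not part of the harness):

echo 'openshift-tests Monitor cluster while tests execute' | grep -P 'openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'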
40 error-level events were detected during this test run:

Nov 08 04:26:37.511 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-ffwq5 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 04:26:37.622 E ns/openshift-marketplace pod/certified-operators-69648cf96c-9w7b5 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=certified-operators container exited with code 2 (Error): 
Nov 08 04:26:37.684 E ns/openshift-marketplace pod/redhat-operators-65f4589bc6-kdkts node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=redhat-operators container exited with code 2 (Error): 
Nov 08 04:26:37.703 E ns/openshift-ingress pod/router-default-544b7557ff-62dd4 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=router container exited with code 2 (Error):  " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:20:14.522Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:20:19.492Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:20:24.500Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1108 04:20:36.147530       1 limiter.go:140] error reloading router: waitid: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n2019-11-08T04:20:41.051Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:21:12.490Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:21:17.471Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:21:55.993Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:22:30.329Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:22:36.765Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:26:32.916Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 04:26:37.723 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-25spf node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 04:26:37.755 E ns/openshift-marketplace pod/community-operators-5b5f4b6556-nvw5g node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=community-operators container exited with code 2 (OOMKilled): 
Nov 08 04:26:38.536 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=config-reloader container exited with code 2 (Error): 2019/11/08 04:19:34 Watching directory: "/etc/alertmanager/config"\n
Nov 08 04:26:38.536 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 04:19:34 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 04:19:34 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 04:19:34 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 04:19:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 04:19:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 04:19:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 04:19:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 04:19:34 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 04:36:37.902 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 04:36:45.772 E ns/openshift-ingress pod/router-default-544b7557ff-zw66q node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:27:29.411Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:27:40.711Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:28:06.880Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:34:25.875Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:34:30.875Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:35:00.447Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:35:15.355Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:35:25.642Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:36:21.668Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:36:26.670Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:36:31.675Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 04:36:45.807 E ns/openshift-marketplace pod/redhat-operators-65f4589bc6-kt97f node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=redhat-operators container exited with code 2 (Error): 
Nov 08 04:36:46.826 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 04:19:55 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 08 04:36:46.826 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 04:20:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 04:20:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 04:20:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 04:20:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 04:20:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 04:20:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 04:20:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 04:20:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 04:20:01 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 04:36:46.826 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T04:19:55.595948066Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T04:19:55.596320567Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T04:19:55.599201376Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T04:20:00.59796683Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T04:20:05.597921688Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T04:20:10.793724055Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T04:21:29.717150935Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 04:36:46.862 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=config-reloader container exited with code 2 (Error): 2019/11/08 04:20:23 Watching directory: "/etc/alertmanager/config"\n
Nov 08 04:36:46.862 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 04:20:23 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 04:20:23 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 04:20:23 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 04:20:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 04:20:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 04:20:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 04:20:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 04:20:23 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 04:36:46.906 E ns/openshift-monitoring pod/grafana-85bf74556f-tc9pr node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-4c4sl container=grafana-proxy container exited with code 2 (Error): 
Nov 08 04:37:01.575 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:01.575 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:01.575 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-m246d container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:16.087 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus3-ks8pb container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:37:57.069 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus1-7rlzc container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T04:37:40.152Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T04:37:40.156Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T04:37:40.157Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T04:37:40.158Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T04:37:40.158Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T04:37:40.158Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T04:37:40.162Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T04:37:40.162Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 04:40:13.399 E ns/openshift-image-registry pod/node-ca-q4ngl node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.622 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.622 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.622 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.728 E ns/openshift-marketplace pod/redhat-operators-65f4589bc6-plm8x node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.771 E ns/openshift-machine-config-operator pod/machine-config-daemon-5mfd5 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:13.771 E ns/openshift-machine-config-operator pod/machine-config-daemon-5mfd5 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:22.924 E ns/openshift-dns pod/dns-default-tpt6p node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:40:22.924 E ns/openshift-dns pod/dns-default-tpt6p node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 04:52:34.918 E ns/openshift-image-registry pod/node-ca-pjn4t node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 05:19:06.801 E ns/openshift-dns pod/dns-default-2x9r6 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 05:19:06.801 E ns/openshift-dns pod/dns-default-2x9r6 node/ci-op-5g2q6wci-09f59-cbs9r-worker-centralus2-p5k4g container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
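For triage, a minimal sketch that buckets most of the events above by exit reason, separating the code 2 (Error) exits seen during the 04:26-04:36 drains from the code 137 (ContainerStatusUnknown) kills. It assumes the event list has been saved to a local file; events.txt is a hypothetical name:

grep -oE 'exited with code [0-9]+ \([A-Za-z]+\)' events.txt | sort | uniq -c | sort -rn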

Full stdout/stderr: junit_e2e_20191108-053202.xml



Passed tests: 55

Skipped tests: 173