Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-07 01:09
Elapsed: 2h8m
Work namespace: ci-op-pg7mtgss
Pod: 4.3.0-0.nightly-2019-11-07-010532-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h8m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
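The focus pattern above is a Ginkgo regular expression with its spaces and hyphens escaped; unescaped, it matches the suite name verbatim. A minimal sketch for listing the matching specs without running them, assuming --test_args passes flags through to Ginkgo unchanged and that the stock Ginkgo dry-run flag is available in this vintage of the suite:

# Unescaped, the focus regex is:
#   openshift-tests Monitor cluster while tests execute$
# --ginkgo.dryRun (standard Ginkgo v1 flag) prints matching specs without executing them.
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$ --ginkgo.dryRun'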
35 error level events were detected during this test run:

Nov 07 02:03:09.064 E ns/openshift-marketplace pod/redhat-operators-64566c9856-wzdq5 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=redhat-operators container exited with code 2 (Error): 
Nov 07 02:03:09.108 E ns/openshift-marketplace pod/certified-operators-69589995bf-lgmd5 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=certified-operators container exited with code 2 (Error): 
Nov 07 02:03:10.217 E ns/openshift-monitoring pod/prometheus-adapter-6fb7598968-nn4m5 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=prometheus-adapter container exited with code 2 (Error): I1107 01:44:39.617894       1 adapter.go:93] successfully using in-cluster auth\nI1107 01:44:41.923893       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 07 02:03:10.235 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-ggpmk node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=kube-state-metrics container exited with code 2 (Error): 
Nov 07 02:03:10.275 E ns/openshift-marketplace pod/community-operators-9567c47bd-5jjp8 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=community-operators container exited with code 2 (Error): 
Nov 07 02:03:10.297 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=config-reloader container exited with code 2 (Error): 2019/11/07 01:46:41 Watching directory: "/etc/alertmanager/config"\n
Nov 07 02:03:10.297 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 01:46:41 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 01:46:41 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 01:46:41 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 01:46:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 01:46:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 01:46:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 01:46:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 01:46:41 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 02:03:10.339 E ns/openshift-machine-config-operator pod/machine-config-daemon-f2pd8 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=oauth-proxy container exited with code 143 (Error): 
Nov 07 02:03:10.359 E ns/openshift-ingress pod/router-default-68c6467955-x4lmx node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-2h5n9 container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:50:57.386Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:51:02.371Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:51:11.041Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:51:16.068Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:52:19.090Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:52:28.458Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:52:52.486Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:53:31.151Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:59:05.297Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T01:59:10.269Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:03:04.005Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 07 02:11:41.583 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 02:13:24.638 E ns/openshift-marketplace pod/certified-operators-69589995bf-9l67b node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=certified-operators container exited with code 2 (Error): 
Nov 07 02:13:24.660 E ns/openshift-monitoring pod/telemeter-client-55f55df45-lnv6w node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=reload container exited with code 2 (Error): 
Nov 07 02:13:24.660 E ns/openshift-monitoring pod/telemeter-client-55f55df45-lnv6w node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=telemeter-client container exited with code 2 (OOMKilled): 
Nov 07 02:13:24.707 E ns/openshift-ingress pod/router-default-68c6467955-pc5mb node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:11:14.967Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:11:19.961Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:11:29.303Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:11:34.933Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:11:47.860Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:12:09.282Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:12:14.004Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:12:22.759Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:12:28.202Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:12:35.837Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-07T02:13:20.237Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 07 02:13:25.574 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/07 01:50:58 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/07 01:51:08 config map updated\n2019/11/07 01:51:08 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n
Nov 07 02:13:25.574 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=prometheus-proxy container exited with code 2 (Error): 2019/11/07 01:51:04 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/07 01:51:04 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 01:51:04 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 01:51:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/07 01:51:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 01:51:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/07 01:51:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 01:51:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/07 01:51:04 http.go:96: HTTPS: listening on [::]:9091\n
Nov 07 02:13:25.574 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-07T01:50:58.538241652Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-07T01:50:58.538389853Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-07T01:50:58.541028065Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-07T01:51:03.5403237Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-07T01:51:08.540782441Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-07T01:51:13.860263968Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-07T01:52:35.855807061Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 07 02:13:25.583 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=config-reloader container exited with code 2 (Error): 2019/11/07 01:47:15 Watching directory: "/etc/alertmanager/config"\n
Nov 07 02:13:25.583 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-hrgq8 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 01:47:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 01:47:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 01:47:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 01:47:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 01:47:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 01:47:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 01:47:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 01:47:15 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 02:13:46.266 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus3-gvl4g container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:13:46.266 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus3-gvl4g container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:13:46.266 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus3-gvl4g container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:14:03.495 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-sfhqj container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T02:13:48.963Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T02:13:48.971Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T02:13:48.971Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T02:13:48.972Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T02:13:48.972Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T02:13:48.972Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T02:13:48.973Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T02:13:48.973Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T02:13:48.973Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T02:13:48.977Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T02:13:48.977Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 02:17:17.884 E ns/openshift-authentication pod/oauth-openshift-6564f675b6-njdhv node/ci-op-pg7mtgss-09f59-ng62w-master-2 container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:22:00.207 E ns/openshift-machine-config-operator pod/machine-config-daemon-l5rhl node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=oauth-proxy container exited with code 143 (Error): 
Nov 07 02:22:01.186 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=config-reloader container exited with code 2 (Error): 2019/11/07 02:15:51 Watching directory: "/etc/alertmanager/config"\n
Nov 07 02:22:01.186 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 02:15:58 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 02:15:58 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 02:15:58 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 02:15:58 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 02:15:58 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 02:15:58 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 02:15:58 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 02:15:58 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 02:22:01.271 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-8z6z2 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=openshift-state-metrics container exited with code 2 (Error): 
Nov 07 02:23:20.797 E ns/openshift-dns pod/dns-default-s9d7l node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:23:20.797 E ns/openshift-dns pod/dns-default-s9d7l node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:27:56.707 E ns/openshift-machine-config-operator pod/machine-config-daemon-6hnt5 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=oauth-proxy container exited with code 143 (Error): 
Nov 07 02:30:58.538 E ns/openshift-machine-config-operator pod/machine-config-daemon-hjjm4 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:30:58.538 E ns/openshift-machine-config-operator pod/machine-config-daemon-hjjm4 node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 02:42:44.825 E ns/openshift-marketplace pod/community-operators-9567c47bd-2thsq node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus1-sfhqj container=community-operators container exited with code 2 (Error): 
Nov 07 02:54:02.547 E ns/openshift-marketplace pod/community-operators-7857bd4d58-rfnpn node/ci-op-pg7mtgss-09f59-ng62w-worker-centralus2-pvtx9 container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
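Two patterns account for most of the events above. Exit codes 137 and 143 are signal deaths (128+SIGKILL and 128+SIGTERM respectively), and the ContainerStatusUnknown entries mean the kubelet could not locate the container when the pod was terminated; both are consistent with the disruptive tests in the serial suite draining and rebooting worker nodes. Exit code 2 is the container's own error status. The connection-refused lines from the prometheus-k8s-1 sidecars are the config reloader retrying Prometheus's lifecycle endpoint before the server is listening. A hedged sketch of that same reload call, assuming Prometheus was started with --web.enable-lifecycle (the endpoint is disabled otherwise):

# What rules-configmap-reloader and prometheus-config-reloader retry until Prometheus is up:
curl -X POST http://localhost:9090/-/reload

Once Prometheus binds :9090 the retries succeed, which matches the later "Prometheus reload triggered" lines in the prometheus-config-reloader log above.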

stdout/stderr: junit_e2e_20191107-030030.xml



55 Passed Tests

173 Skipped Tests