Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-07 13:17
Elapsed: 2h20m
Work namespace: ci-op-bmi38l4k
Pod: 4.3.0-0.nightly-2019-11-07-131058-azure-serial

Test Failures


openshift-tests: Monitor cluster while tests execute (1h18m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
27 error-level events were detected during this test run:

Nov 07 14:02:11.357 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-nnh5z node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=openshift-state-metrics container exited with code 2 (Error): 
Nov 07 14:02:11.408 E ns/openshift-marketplace pod/community-operators-7c9fcfbf56-tv92b node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=community-operators container exited with code 2 (Error): 
Nov 07 14:02:11.440 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=config-reloader container exited with code 2 (Error): 2019/11/07 13:55:30 Watching directory: "/etc/alertmanager/config"\n
Nov 07 14:02:11.440 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 13:55:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 13:55:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 13:55:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 13:55:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 13:55:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 13:55:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 13:55:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 13:55:30 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 14:02:11.496 E ns/openshift-marketplace pod/redhat-operators-5d79566894-vrcfw node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=redhat-operators container exited with code 2 (Error): 
Nov 07 14:02:11.569 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-kw72n node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=kube-state-metrics container exited with code 2 (Error): 
Nov 07 14:07:13.043 E ns/openshift-image-registry pod/node-ca-b24b8 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:22:16.599 E ns/openshift-authentication pod/oauth-openshift-c5ff5b4b6-66jhf node/ci-op-bmi38l4k-09f59-f2jqv-master-1 container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:22:53.815 E ns/openshift-machine-config-operator pod/machine-config-daemon-zbw6k node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:22:53.815 E ns/openshift-machine-config-operator pod/machine-config-daemon-zbw6k node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:31:26.598 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/07 13:55:50 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 07 14:31:26.598 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=prometheus-proxy container exited with code 2 (Error): 2019/11/07 13:55:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/07 13:55:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 13:55:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 13:55:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/07 13:55:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 13:55:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/07 13:55:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 13:55:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/07 13:55:55 http.go:96: HTTPS: listening on [::]:9091\n
Nov 07 14:31:26.598 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-07T13:55:49.904421081Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-07T13:55:49.904512081Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-07T13:55:49.905713087Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-07T13:55:54.906734688Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-07T13:55:59.905760631Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-07T13:56:05.057492672Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-07T13:57:05.972282006Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 07 14:31:26.672 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:31:26.672 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:31:26.672 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=config-reloader container exited with code 2 (Error): 2019/11/07 13:56:33 Watching directory: "/etc/alertmanager/config"\n
Nov 07 14:31:26.672 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 13:56:34 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 13:56:34 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 13:56:34 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 13:56:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 13:56:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 13:56:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 13:56:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 13:56:34 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 14:31:26.689 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=config-reloader container exited with code 2 (Error): 2019/11/07 14:02:28 Watching directory: "/etc/alertmanager/config"\n
Nov 07 14:31:26.689 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 14:02:28 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 14:02:28 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 14:02:28 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 14:02:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 14:02:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 14:02:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 14:02:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 14:02:28 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 14:31:26.825 E ns/openshift-monitoring pod/grafana-85bf74556f-tj9jv node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:31:26.825 E ns/openshift-monitoring pod/grafana-85bf74556f-tj9jv node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:31:26.993 E ns/openshift-monitoring pod/prometheus-adapter-756646dcd8-vjjfc node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-ct8sj container=prometheus-adapter container exited with code 2 (Error): I1107 13:54:09.505840       1 adapter.go:93] successfully using in-cluster auth\nI1107 13:54:10.081379       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 07 14:33:30.948 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-kdffw container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T14:33:04.313Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T14:33:04.318Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T14:33:04.318Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T14:33:04.320Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T14:33:04.321Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T14:33:04.321Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T14:33:04.321Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T14:33:04.327Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T14:33:04.327Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=info ts=2019-11-07T14:33:04.327Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=error ts=2019-11-07
Nov 07 14:35:11.210 E ns/openshift-machine-config-operator pod/machine-config-daemon-8rdd9 node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=oauth-proxy container exited with code 143 (Error): 
Nov 07 14:35:11.256 E ns/openshift-dns pod/dns-default-djp4j node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 14:35:11.256 E ns/openshift-dns pod/dns-default-djp4j node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 15:01:24.492 E ns/openshift-machine-config-operator pod/machine-config-daemon-rvqvq node/ci-op-bmi38l4k-09f59-f2jqv-worker-westus-nhgwx container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
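Events like the ones above follow a regular shape (`ns/<namespace> pod/<pod> node/<node> container=<name> container exited with code <N>`), which makes them easy to tally when triaging a run. The following is a minimal, hypothetical sketch (not part of the CI tooling) that groups such lines by namespace and exit code; the regex assumes the exact format seen in this run and may need adjusting for other event kinds.

```python
import re
from collections import Counter

# Matches the "container exited with code N" events shown in this run.
# Assumption: real monitor output may contain other event shapes that
# this pattern intentionally skips.
EVENT_RE = re.compile(
    r"ns/(?P<ns>\S+) pod/(?P<pod>\S+) .*"
    r"container=(?P<container>\S+) container exited with code (?P<code>\d+)"
)

def tally(lines):
    """Count events per (namespace, exit code); non-matching lines are ignored."""
    counts = Counter()
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            counts[(m.group("ns"), m.group("code"))] += 1
    return counts

# Two abbreviated sample lines in the format from this run:
sample = [
    "Nov 07 14:02:11.357 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-nnh5z "
    "node/worker-nhgwx container=openshift-state-metrics container exited with code 2 (Error): ",
    "Nov 07 14:07:13.043 E ns/openshift-image-registry pod/node-ca-b24b8 "
    "node/worker-nhgwx container=node-ca container exited with code 137 (ContainerStatusUnknown): ",
]
print(tally(sample))
```

A tally like this quickly shows the pattern in this run: most events are code 2 (process killed mid-write during node disruption) or code 137 (SIGKILL / container status lost), clustered on a few worker nodes.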

stdout/stderr from junit_e2e_20191107-151820.xml



Passed tests: 55

Skipped tests: 173