Result: SUCCESS
Tests: 1 failed / 55 succeeded
Started: 2019-11-08 06:07
Elapsed: 2h17m
Work namespace: ci-op-kt9dnp8y
Pod: 4.3.0-0.nightly-2019-11-08-060454-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h5m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
36 error level events were detected during this test run:

Nov 08 07:00:49.271 E ns/openshift-ingress pod/router-default-77869dc64-6vmqk node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:43:49.372Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:44:14.407Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:45:01.687Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:45:06.666Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:45:37.592Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:45:42.553Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:46:56.111Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:47:01.101Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:55:16.573Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:55:21.540Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:00:44.261Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 07:00:49.332 E ns/openshift-marketplace pod/redhat-operators-5b6fcf68b6-tk8h4 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=redhat-operators container exited with code 2 (OOMKilled): 
Nov 08 07:00:49.351 E ns/openshift-machine-config-operator pod/machine-config-daemon-xjh9x node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=oauth-proxy container exited with code 143 (Error): 
Nov 08 07:00:50.328 E ns/openshift-monitoring pod/prometheus-adapter-5875d9f4cb-n8kz9 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=prometheus-adapter container exited with code 2 (Error): I1108 06:40:27.659111       1 adapter.go:93] successfully using in-cluster auth\nI1108 06:40:28.107491       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 07:00:50.374 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-k978n node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 07:00:50.395 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-w85mv node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 07:00:50.416 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=config-reloader container exited with code 2 (Error): 2019/11/08 06:42:17 Watching directory: "/etc/alertmanager/config"\n
Nov 08 07:00:50.416 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 06:42:17 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 06:42:17 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 06:42:17 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 06:42:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 06:42:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 06:42:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 06:42:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 06:42:17 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 07:09:41.499 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 07:10:02.156 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T06:42:38.936Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T06:42:38.941Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T06:42:38.941Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T06:42:38.942Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T06:42:38.942Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T06:42:38.943Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T06:42:38.943Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T06:42:38.943Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T06:42:38.947Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T06:42:38.947Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 07:10:02.156 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 06:42:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 06:59:40 config map updated\n2019/11/08 06:59:41 successfully triggered reload\n
Nov 08 07:10:02.156 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 06:42:48 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 06:42:48 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 06:42:48 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 06:42:48 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 06:42:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 06:42:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 06:42:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 06:42:48 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 06:42:48 http.go:96: HTTPS: listening on [::]:9091\n2019/11/08 06:43:23 oauthproxy.go:774: basicauth: 10.128.2.4:48236 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:47:54 oauthproxy.go:774: basicauth: 10.128.2.4:49756 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:52:24 oauthproxy.go:774: basicauth: 10.128.2.4:50996 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:56:55 oauthproxy.go:774: basicauth: 10.128.2.4:52244 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:01:25 oauthproxy.go:774: basicauth: 10.128.2.4:53724 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11
Nov 08 07:10:02.156 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T06:42:42.3178551Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T06:42:42.318028601Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T06:42:42.319491809Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T06:42:47.321466868Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T06:42:52.319511806Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T06:42:57.510217997Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T06:44:22.906124083Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 07:10:02.211 E ns/openshift-marketplace pod/community-operators-576bcb7d94-s6vls node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=community-operators container exited with code 2 (Error): 
Nov 08 07:10:03.278 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=config-reloader container exited with code 2 (Error): 2019/11/08 06:43:33 Watching directory: "/etc/alertmanager/config"\n
Nov 08 07:10:03.310 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:03.420 E ns/openshift-ingress pod/router-default-77869dc64-jrfpx node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-vd2sp container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:10:15.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus1-k9nzv container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:11:19.885 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T07:11:00.013Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T07:11:00.032Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T07:11:00.033Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T07:11:00.035Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T07:11:00.035Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T07:11:00.035Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T07:11:00.040Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T07:11:00.040Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 07:19:04.121 E ns/openshift-ingress pod/router-default-77869dc64-5djdl node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=router container exited with code 2 (Error): outer.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:10:59.878Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-08T07:11:00.134Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:09.185Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:14.206Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:21.798Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:26.713Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:31.963Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:37.191Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:11:46.153Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:14:09.278Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T07:19:00.801Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 07:19:04.158 E ns/openshift-monitoring pod/prometheus-adapter-5875d9f4cb-6fstv node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=prometheus-adapter container exited with code 2 (Error): I1108 07:10:45.662461       1 adapter.go:93] successfully using in-cluster auth\nI1108 07:10:47.097361       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 07:19:04.264 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 07:11:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 08 07:19:04.264 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 07:11:16 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 07:11:16 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 07:11:16 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 07:11:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 07:11:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 07:11:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 07:11:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 07:11:16 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 07:11:16 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 07:19:04.264 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T07:11:06.77656224Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T07:11:06.77680286Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T07:11:06.779066748Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T07:11:11.778695051Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T07:11:16.78632386Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T07:11:22.182188368Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 07:19:44.616 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus2-g4bpb container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T07:19:31.481Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T07:19:31.488Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T07:19:31.488Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T07:19:31.489Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T07:19:31.490Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T07:19:31.490Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T07:19:31.490Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T07:19:31.495Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T07:19:31.495Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 07:33:54.508 E ns/openshift-machine-config-operator pod/machine-config-daemon-92d9q node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=oauth-proxy container exited with code 143 (Error): 
Nov 08 07:33:55.599 E ns/openshift-dns pod/dns-default-6bqgb node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:33:55.599 E ns/openshift-dns pod/dns-default-6bqgb node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:35:20.897 E ns/openshift-dns pod/dns-default-q2t2b node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:35:20.897 E ns/openshift-dns pod/dns-default-q2t2b node/ci-op-kt9dnp8y-09f59-s7rd6-worker-centralus3-wdwxv container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
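Nearly all of the events above are containers terminated while their nodes were being disrupted during the serial suite: exit code 143 is SIGTERM, 137 is SIGKILL (reported here as ContainerStatusUnknown because the container could not be located after the pod was terminated), exit code 2 is a generic error at shutdown, and the redhat-operators pod was OOMKilled. A minimal sketch, assuming access to the live test cluster and the oc CLI (the commands and the openshift-monitoring namespace below are illustrative follow-up steps, not part of this job's output), of pulling the same exit codes directly:

# List warning-level events across all namespaces (assumes the cluster is still reachable)
oc get events --all-namespaces --field-selector type=Warning

# Dump the last-terminated exit code of every container in the monitoring pods seen above
oc get pods -n openshift-monitoring \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.name}={.lastState.terminated.exitCode} {end}{"\n"}{end}'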

				
Full stdout/stderr in junit_e2e_20191108-080443.xml



Passed tests: 55

Skipped tests: 173