Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-08 09:49
Elapsed: 2h5m
Work namespace: ci-op-t7q3iphx
Pod: 4.3.0-0.nightly-2019-11-08-094604-openstack-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h14m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
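
The --ginkgo.focus value is just the failing test's name with regex metacharacters escaped. An illustrative local re-run, assuming a built openshift/origin checkout and a KUBECONFIG pointing at a disposable test cluster (the kubeconfig path is hypothetical, not taken from this job):

# illustrative re-run against your own cluster, not part of this job's output
export KUBECONFIG=$HOME/clusters/test/kubeconfig
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'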
35 error-level events were detected during this test run (a triage sketch follows the list):

Nov 08 10:30:03.207 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Nov 08 10:42:58.428 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 10:45:10.861 E ns/openshift-machine-config-operator pod/machine-config-daemon-cq6nb node/t7q3iphx-9f2ed-mvpdr-worker-6z4cc container=oauth-proxy container exited with code 143 (Error): 
Nov 08 11:32:46.115 E ns/openshift-monitoring pod/prometheus-adapter-7567645444-wsmgq node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-adapter container exited with code 2 (Error): I1108 10:20:03.779204       1 adapter.go:93] successfully using in-cluster auth\nI1108 10:20:04.898797       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 11:32:46.133 E ns/openshift-monitoring pod/alertmanager-main-0 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=config-reloader container exited with code 2 (Error): 2019/11/08 10:24:03 Watching directory: "/etc/alertmanager/config"\n
Nov 08 11:32:46.133 E ns/openshift-monitoring pod/alertmanager-main-0 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 10:24:04 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:24:04 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:24:04 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:24:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 10:24:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:24:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:24:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:24:04 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 11:32:46.158 E ns/openshift-monitoring pod/prometheus-adapter-7567645444-vh9dv node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-adapter container exited with code 2 (Error): I1108 10:20:02.849836       1 adapter.go:93] successfully using in-cluster auth\nI1108 10:20:03.620063       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 11:32:47.123 E ns/openshift-marketplace pod/certified-operators-68cb7cf445-4c9wp node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=certified-operators container exited with code 2 (Error): 
Nov 08 11:32:47.151 E ns/openshift-monitoring pod/alertmanager-main-2 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=config-reloader container exited with code 2 (Error): 2019/11/08 10:22:05 Watching directory: "/etc/alertmanager/config"\n
Nov 08 11:32:47.151 E ns/openshift-monitoring pod/alertmanager-main-2 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 10:22:07 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:22:07 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:22:07 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:22:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 10:22:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:22:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:22:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:22:07 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 11:32:47.169 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-2drxm node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 11:32:47.186 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-vh2rr node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 11:32:47.201 E ns/openshift-marketplace pod/redhat-operators-c548d89db-rwdmz node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=redhat-operators container exited with code 2 (Error): 
Nov 08 11:32:47.222 E ns/openshift-monitoring pod/thanos-querier-584b7854c-swbdn node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=oauth-proxy container exited with code 2 (Error): 2019/11/08 10:21:27 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 10:21:27 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:21:27 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:21:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 10:21:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:21:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 10:21:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:21:27 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 10:21:27 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 11:32:47.238 E ns/openshift-marketplace pod/community-operators-5d6cd5c4b-5nb9n node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=community-operators container exited with code 2 (Error): 
Nov 08 11:32:47.285 E ns/openshift-ingress pod/router-default-694c548b55-2c9d6 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:06:35.481Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:06:40.477Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:07:08.109Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:07:31.334Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:08:45.768Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:09:18.672Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:16:05.376Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:16:10.373Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:32:25.495Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:32:30.490Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T11:32:39.036Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 11:32:47.303 E ns/openshift-monitoring pod/grafana-85bf74556f-f7w6g node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=grafana-proxy container exited with code 2 (Error): 
Nov 08 11:32:48.135 E ns/openshift-monitoring pod/prometheus-k8s-0 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 10:22:28 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 10:22:42 config map updated\n2019/11/08 10:22:42 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n
Nov 08 11:32:48.135 E ns/openshift-monitoring pod/prometheus-k8s-0 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 10:22:32 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 10:22:32 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:22:32 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:22:32 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 10:22:32 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:22:32 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 10:22:32 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:22:32 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 10:22:32 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 11:32:48.135 E ns/openshift-monitoring pod/prometheus-k8s-0 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-config-reloader container exited with code 2 (Error): l.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:36.908066415Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:41.9082726Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:46.908245653Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:51.908110531Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:56.907676223Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T10:23:02.083369069Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T10:23:02.229711247Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheu
Nov 08 11:32:48.154 E ns/openshift-monitoring pod/alertmanager-main-1 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=config-reloader container exited with code 2 (Error): 2019/11/08 10:23:01 Watching directory: "/etc/alertmanager/config"\n
Nov 08 11:32:48.154 E ns/openshift-monitoring pod/alertmanager-main-1 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 10:23:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:23:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 10:23:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 10:23:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 10:23:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 10:23:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 10:23:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 10:23:01 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 11:32:48.184 E ns/openshift-monitoring pod/prometheus-k8s-1 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 10:22:28 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 10:22:42 config map updated\n2019/11/08 10:22:42 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n
Nov 08 11:32:48.184 E ns/openshift-monitoring pod/prometheus-k8s-1 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-proxy container exited with code 2 (Error): hproxy.go:774: basicauth: 10.131.0.18:38934 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:36:51 oauthproxy.go:774: basicauth: 10.131.0.18:45570 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:41:21 oauthproxy.go:774: basicauth: 10.131.0.18:52210 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:45:51 oauthproxy.go:774: basicauth: 10.131.0.18:58880 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:50:21 oauthproxy.go:774: basicauth: 10.131.0.18:37320 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:54:51 oauthproxy.go:774: basicauth: 10.131.0.18:43864 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 10:59:21 oauthproxy.go:774: basicauth: 10.131.0.18:50420 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 11:03:52 oauthproxy.go:774: basicauth: 10.131.0.18:56910 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 11:08:22 oauthproxy.go:774: basicauth: 10.131.0.18:35346 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 11:12:52 oauthproxy.go:774: basicauth: 10.131.0.18:41946 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 11:17:22 oauthproxy.go:774: basicauth: 10.131.0.18:48548 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 11:21:52 oauthproxy.go:774: basicauth: 10.131.0.18:55140 Authorization header does not start with 'Basic', skipping basic authentication\n201
Nov 08 11:32:48.184 E ns/openshift-monitoring pod/prometheus-k8s-1 node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=prometheus-config-reloader container exited with code 2 (Error): l.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:36.913557802Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:41.913002417Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:46.913433641Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:51.913222129Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T10:22:56.913642342Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T10:23:02.06751595Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T10:23:02.22357299Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheu
Nov 08 11:32:48.198 E ns/openshift-monitoring pod/telemeter-client-7897889495-j27nv node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=reload container exited with code 2 (Error): 
Nov 08 11:32:48.198 E ns/openshift-monitoring pod/telemeter-client-7897889495-j27nv node/t7q3iphx-9f2ed-mvpdr-worker-6jv8f container=telemeter-client container exited with code 2 (Error): 
Nov 08 11:33:21.158 E ns/openshift-monitoring pod/prometheus-k8s-1 node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T11:33:14.141Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T11:33:14.145Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T11:33:14.145Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T11:33:14.147Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T11:33:14.147Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T11:33:14.147Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T11:33:14.151Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T11:33:14.151Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 11:33:39.275 E ns/openshift-monitoring pod/prometheus-k8s-0 node/t7q3iphx-9f2ed-mvpdr-worker-6z4cc container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T11:33:26.403Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T11:33:26.408Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T11:33:26.409Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T11:33:26.411Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T11:33:26.411Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T11:33:26.411Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T11:33:26.414Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T11:33:26.415Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 11:43:44.850 E ns/openshift-monitoring pod/prometheus-adapter-7567645444-pnwnh node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=prometheus-adapter container exited with code 2 (Error): I1108 11:32:54.757651       1 adapter.go:93] successfully using in-cluster auth\nI1108 11:32:55.596658       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 11:43:44.869 E ns/openshift-machine-config-operator pod/machine-config-daemon-cjdfd node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=oauth-proxy container exited with code 143 (Error): 
Nov 08 11:43:45.854 E ns/openshift-monitoring pod/thanos-querier-584b7854c-qpnc6 node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=oauth-proxy container exited with code 2 (Error): 2019/11/08 11:32:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 11:32:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 11:32:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 11:32:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 11:32:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 11:32:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 11:32:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 11:32:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 11:32:55 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 11:43:45.871 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-rtbdb node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 11:43:46.857 E ns/openshift-monitoring pod/alertmanager-main-2 node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=config-reloader container exited with code 2 (Error): 2019/11/08 11:33:17 Watching directory: "/etc/alertmanager/config"\n
Nov 08 11:43:46.857 E ns/openshift-monitoring pod/alertmanager-main-2 node/t7q3iphx-9f2ed-mvpdr-worker-wqdgw container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 11:33:18 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 11:33:18 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 11:33:18 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 11:33:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 11:33:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 11:33:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 11:33:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 11:33:18 http.go:96: HTTPS: listening on [::]:9095\n
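
The events above fall into a few recognizable buckets: two clusteroperators (ingress, dns) going Degraded, and two waves of openshift-monitoring, openshift-marketplace, and openshift-ingress containers on worker-6jv8f (~11:32) and worker-wqdgw (~11:43) exiting with code 2 or 143 (143 is 128+15, i.e. SIGTERM). Whole-node waves like these on a serial job typically coincide with node disruption or reboot tests rather than independent crashes. A hedged triage sketch against a live cluster, using standard oc commands (none of these were run as part of this job):

# check which operators still report Degraded
oc get clusteroperators
oc describe clusteroperator dns
# the DaemonSets behind the NotAllDNSesAvailable condition
oc -n openshift-dns get daemonset
# previous-container logs for a restarted monitoring pod
oc -n openshift-monitoring logs prometheus-k8s-0 -c prometheus --previous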

stdout/stderr from junit_e2e_20191108-114348.xml



Passed tests: 54
Skipped tests: 173