Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-01 21:56
Elapsed: 2h0m
Work namespace: ci-op-iq0wt97h
Pod: 4.3.0-0.nightly-2019-11-01-215341-openstack-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h3m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
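The --ginkgo.focus value above is a Go regular expression matched against the full test name, which is why the spaces and hyphen in the name are escaped. A minimal, purely illustrative sketch checking that the quoted pattern matches the failing test's name (the variable names are just for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Pattern copied from the --ginkgo.focus argument above.
	focus := regexp.MustCompile(`openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$`)

	// Full name of the failing test, as listed in this report.
	name := "openshift-tests Monitor cluster while tests execute"

	fmt.Println(focus.MatchString(name)) // prints: true
}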
37 error level events were detected during this test run:

Nov 01 22:47:44.432 E ns/openshift-authentication pod/oauth-openshift-84f457db85-m52sn node/iq0wt97h-9f2ed-qwpsj-master-2 container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:05:03.039 E ns/openshift-image-registry pod/node-ca-c8l4q node/iq0wt97h-9f2ed-qwpsj-worker-5x77g container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:10:36.416 E ns/openshift-ingress pod/router-default-894fc77b4-m9vsg node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:10:36.448 E ns/openshift-machine-config-operator pod/machine-config-daemon-rk66b node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:10:36.448 E ns/openshift-machine-config-operator pod/machine-config-daemon-rk66b node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:21:07.586 E ns/openshift-ingress pod/router-default-894fc77b4-dg5mz node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:21:08.508 E ns/openshift-machine-config-operator pod/machine-config-daemon-lg2qp node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=oauth-proxy container exited with code 143 (Error): 
Nov 01 23:33:37.658 E ns/openshift-marketplace pod/certified-operators-684d66c97d-46cfx node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=certified-operators container exited with code 2 (Error): 
Nov 01 23:33:37.678 E ns/openshift-marketplace pod/community-operators-d94cf7874-xwglp node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=community-operators container exited with code 2 (Error): 
Nov 01 23:33:38.669 E ns/openshift-monitoring pod/prometheus-k8s-0 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 22:36:43 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 23:33:38.669 E ns/openshift-monitoring pod/prometheus-k8s-0 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 22:36:44 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 22:36:44 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:44 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 22:36:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 22:36:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:44 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 22:36:44 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 23:33:38.669 E ns/openshift-monitoring pod/prometheus-k8s-0 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T22:36:42.401126441Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T22:36:42.401354813Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T22:36:42.404287475Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T22:36:47.5512892Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-01T22:38:12.802636393Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 23:33:38.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T22:36:39.427Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T22:36:39.433Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T22:36:39.438Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T22:36:39.439Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T22:36:39.439Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T22:36:39.439Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T22:36:39.446Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T22:36:39.446Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 23:33:38.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 22:36:43 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 23:33:38.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-proxy container exited with code 2 (Error): :44 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 22:36:44 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 22:37:13 oauthproxy.go:774: basicauth: 10.131.0.14:49326 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 22:41:44 oauthproxy.go:774: basicauth: 10.131.0.14:56852 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 22:46:14 oauthproxy.go:774: basicauth: 10.131.0.14:36736 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 22:50:44 oauthproxy.go:774: basicauth: 10.131.0.14:43820 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 22:55:14 oauthproxy.go:774: basicauth: 10.131.0.14:50900 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 22:59:44 oauthproxy.go:774: basicauth: 10.131.0.14:57790 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 23:04:15 oauthproxy.go:774: basicauth: 10.131.0.14:36444 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 23:08:45 oauthproxy.go:774: basicauth: 10.131.0.14:43328 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 23:13:15 oauthproxy.go:774: basicauth: 10.131.0.14:50282 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 23:17:45 oauthproxy.go:774: basicauth: 10.131.0.14:57196 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 23:22:15 oauthproxy.go:774: basicauth: 10.131.0.14:35800 Authorization header does not start with 'Basic', skipping basic authentication\n201
Nov 01 23:33:38.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T22:36:42.401588329Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T22:36:42.401755211Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T22:36:42.404314075Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T22:36:47.555728183Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-01T22:38:04.904388902Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 23:33:38.738 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:33:38.738 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 23:33:38.789 E ns/openshift-monitoring pod/alertmanager-main-0 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=config-reloader container exited with code 2 (Error): 2019/11/01 22:36:59 Watching directory: "/etc/alertmanager/config"\n
Nov 01 23:33:38.789 E ns/openshift-monitoring pod/alertmanager-main-0 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 22:36:59 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:59 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:59 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 22:36:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:59 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 23:33:38.846 E ns/openshift-monitoring pod/alertmanager-main-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=config-reloader container exited with code 2 (Error): 2019/11/01 22:36:44 Watching directory: "/etc/alertmanager/config"\n
Nov 01 23:33:38.846 E ns/openshift-monitoring pod/alertmanager-main-1 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 22:36:45 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:45 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:45 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 22:36:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:45 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 23:33:38.911 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-pjq85 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 23:33:38.943 E ns/openshift-marketplace pod/redhat-operators-f7bd46cd7-xjjsr node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=redhat-operators container exited with code 2 (Error): 
Nov 01 23:33:38.982 E ns/openshift-monitoring pod/grafana-69b9b6556-tnh8s node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=grafana-proxy container exited with code 2 (Error): 
Nov 01 23:33:39.026 E ns/openshift-monitoring pod/telemeter-client-59f464c556-pskdw node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=reload container exited with code 2 (Error): 
Nov 01 23:33:39.026 E ns/openshift-monitoring pod/telemeter-client-59f464c556-pskdw node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=telemeter-client container exited with code 2 (Error): 
Nov 01 23:33:39.041 E ns/openshift-ingress pod/router-default-894fc77b4-n7cpn node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:22:24.795Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:22:32.909Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:22:37.906Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:22:42.906Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:25:38.659Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:25:43.642Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:25:48.644Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:25:55.773Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:33:21.498Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:33:27.868Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T23:33:34.004Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 23:33:39.064 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-86jbj node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 23:33:39.085 E ns/openshift-monitoring pod/prometheus-adapter-d9bcb6d44-kr26z node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-adapter container exited with code 2 (Error): I1101 22:35:13.714001       1 adapter.go:93] successfully using in-cluster auth\nI1101 22:35:14.094816       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 23:33:39.107 E ns/openshift-monitoring pod/thanos-querier-765cc79849-zs58c node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=oauth-proxy container exited with code 2 (Error): 2019/11/01 22:36:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/01 22:36:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 22:36:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/01 22:36:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:30 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 22:36:30 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 23:33:39.127 E ns/openshift-monitoring pod/prometheus-adapter-d9bcb6d44-79vqn node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=prometheus-adapter container exited with code 2 (Error): I1101 22:35:13.832896       1 adapter.go:93] successfully using in-cluster auth\nI1101 22:35:14.682069       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 23:33:39.145 E ns/openshift-monitoring pod/alertmanager-main-2 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=config-reloader container exited with code 2 (Error): 2019/11/01 22:36:28 Watching directory: "/etc/alertmanager/config"\n
Nov 01 23:33:39.145 E ns/openshift-monitoring pod/alertmanager-main-2 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 22:36:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 22:36:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 22:36:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:30 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 23:33:39.742 E ns/openshift-monitoring pod/thanos-querier-765cc79849-kr967 node/iq0wt97h-9f2ed-qwpsj-worker-2zd6h container=oauth-proxy container exited with code 2 (Error): 2019/11/01 22:36:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/01 22:36:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 22:36:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 22:36:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 22:36:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 22:36:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/01 22:36:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 22:36:30 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 22:36:30 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 23:34:34.358 E ns/openshift-monitoring pod/prometheus-k8s-0 node/iq0wt97h-9f2ed-qwpsj-worker-5mg2p container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T23:34:25.726Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T23:34:25.732Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T23:34:25.732Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T23:34:25.735Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T23:34:25.735Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T23:34:25.735Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T23:34:25.738Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T23:34:25.738Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 23:34:53.057 E ns/openshift-monitoring pod/prometheus-k8s-1 node/iq0wt97h-9f2ed-qwpsj-worker-xtc7z container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T23:34:29.735Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T23:34:29.739Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T23:34:29.740Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T23:34:29.741Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T23:34:29.741Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T23:34:29.741Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T23:34:29.878Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T23:34:29.878Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
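Most of the events above are containers that exited with code 137 or 143 while their pods were terminated, plus a handful of ordinary crashes with codes 1 and 2. As a reading aid (not part of the report output): exit codes above 128 encode the terminating signal as 128 + signal number, so 137 corresponds to SIGKILL (9) and 143 to SIGTERM (15). A minimal illustrative sketch of that mapping; the helper name signalFromExitCode is hypothetical and only for this example:

package main

import (
	"fmt"
	"syscall"
)

// signalFromExitCode interprets a container exit code: values above 128
// encode the terminating signal as 128 + signal number; lower values are
// ordinary application exit statuses.
func signalFromExitCode(code int) (syscall.Signal, bool) {
	if code <= 128 {
		return 0, false
	}
	return syscall.Signal(code - 128), true
}

func main() {
	for _, code := range []int{137, 143, 2} {
		if sig, ok := signalFromExitCode(code); ok {
			fmt.Printf("exit code %d => signal %d (%s)\n", code, int(sig), sig)
		} else {
			fmt.Printf("exit code %d => application exit status\n", code)
		}
	}
}

On Linux this prints, for example, "exit code 137 => signal 9 (killed)" and "exit code 143 => signal 15 (terminated)".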

				
stdout/stderr from junit_e2e_20191101-234535.xml



54 Passed Tests

165 Skipped Tests