Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-08 16:24
Elapsed: 1h49m
Work namespace: ci-op-2lmkfic8
Pod: 4.3.0-0.ci-2019-11-08-162301-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute 54m37s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
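
The same reproduction command is shown below split across lines with comments, as a minimal sketch; it assumes a checkout of openshift/origin (which provides hack/e2e.go) and cluster credentials in $KUBECONFIG:

# Re-run only the flaked monitor test; the --ginkgo.focus regex anchors on the
# full test name "openshift-tests Monitor cluster while tests execute".
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'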
31 error-level events were detected during this test run:

Nov 08 17:09:54.882 E ns/openshift-machine-config-operator pod/machine-config-daemon-kcshg node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 17:09:54.934 E ns/openshift-ingress pod/router-default-577fd7d8-dqzc8 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:08.530Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:13.535Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:20.244Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:25.237Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:30.239Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:40.443Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:45.437Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:51.871Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:00:56.865Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:01:13.313Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:01:19.595Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 17:13:10.413 E ns/openshift-machine-config-operator pod/machine-config-daemon-8vs44 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 17:15:20.633 E ns/openshift-machine-config-operator pod/machine-config-daemon-96ddd node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 17:16:30.907 E ns/openshift-image-registry pod/node-ca-645s9 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:38:13.591 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 17:38:22.823 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 17:38:27.666 E ns/openshift-monitoring pod/kube-state-metrics-5d7df65455-8lxvr node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:38:27.666 E ns/openshift-monitoring pod/kube-state-metrics-5d7df65455-8lxvr node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:38:27.666 E ns/openshift-monitoring pod/kube-state-metrics-5d7df65455-8lxvr node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:38:27.689 E ns/openshift-marketplace pod/community-operators-58b8ccc7d5-qnmxv node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:38:27.784 E ns/openshift-monitoring pod/prometheus-adapter-574c77fb44-cgbh6 node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1108 17:00:05.821849       1 adapter.go:93] successfully using in-cluster auth\nI1108 17:00:06.280106       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 17:38:27.945 E ns/openshift-ingress pod/router-default-577fd7d8-qncsk node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:28:21.087Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:28:26.068Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:28:31.075Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:28:46.743Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:29:05.638Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:37:45.146Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:37:52.604Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:06.530Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:11.538Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:16.528Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:21.533Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 17:38:29.009 E ns/openshift-marketplace pod/certified-operators-84fb688b7b-5s5x8 node/ci-op--b75zt-w-b-gcjg2.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 17:39:01.885 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T17:38:54.113Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T17:38:54.116Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T17:38:54.117Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T17:38:54.120Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T17:38:54.120Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 17:39:03.385 E ns/openshift-ingress pod/router-default-577fd7d8-2pmfv node/ci-op--b75zt-w-d-fn2tn.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:06.534Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:11.532Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:16.538Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:21.536Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:26.533Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:33.778Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:38.815Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:43.926Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:48.785Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:53.785Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T17:38:58.782Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 17:39:03.447 E ns/openshift-monitoring pod/thanos-querier-5f4d6bb4f9-75jws node/ci-op--b75zt-w-d-fn2tn.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/08 17:00:03 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 17:00:03 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 17:00:03 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 17:00:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 17:00:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 17:00:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 17:00:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 17:00:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 17:00:03 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 17:39:03.493 E ns/openshift-monitoring pod/telemeter-client-55f95d685d-wcgv2 node/ci-op--b75zt-w-d-fn2tn.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Nov 08 17:39:03.493 E ns/openshift-monitoring pod/telemeter-client-55f95d685d-wcgv2 node/ci-op--b75zt-w-d-fn2tn.c.openshift-gce-devel-ci.internal container=reload container exited with code 2 (Error): 
Nov 08 17:39:19.277 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--b75zt-w-b-l6xbf.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T17:39:16.252Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T17:39:16.255Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T17:39:16.255Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T17:39:16.256Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T17:39:16.256Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T17:39:16.256Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T17:39:16.258Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T17:39:16.258Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 18:00:42.267 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.267 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.267 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T17:38:54.113Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T17:38:54.116Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T17:38:54.117Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T17:38:54.118Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T17:38:54.118Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T17:38:54.120Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T17:38:54.120Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 18:00:42.304 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_prometheus-k8s-1_ec67c7d1-6c78-426d-913f-a8a5d1de8b4b/prometheus-config-reloader/0.log": lstat /var/log/pods/openshift-monitoring_prometheus-k8s-1_ec67c7d1-6c78-426d-913f-a8a5d1de8b4b/prometheus-config-reloader/0.log: no such file or directory
Nov 08 18:00:42.337 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.337 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.337 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.337 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:00:42.337 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-c-7mlb8.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 18:01:00.416 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--b75zt-w-d-wjxxk.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T18:00:57.052Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T18:00:57.057Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T18:00:57.057Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T18:00:57.059Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T18:00:57.059Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T18:00:57.059Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T18:00:57.061Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T18:00:57.061Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
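
Two notes on the exit codes above: 143 is 128+15 (SIGTERM) and 137 is 128+9 (SIGKILL), so most of these events record containers being terminated externally rather than crashing on their own, and ContainerStatusUnknown means the kubelet could not locate the container when the pod was torn down. To get a quick breakdown by reason, a hedged sketch (assuming the event lines above have been saved to a local file; events.txt is a hypothetical name):

# Count error events by exit code and reason, e.g. "143 (Error)" vs
# "137 (ContainerStatusUnknown)"; the clusteroperator lines simply won't match.
grep -oE 'exited with code [0-9]+ \([A-Za-z]+\)' events.txt | sort | uniq -c | sort -rn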

stdout/stderr recorded in junit_e2e_20191108-180254.xml



Passed tests: 57
Skipped tests: 8