Result SUCCESS
Tests 1 failed / 57 succeeded
Started 2019-11-06 06:50
Elapsed 1h42m
Work namespace ci-op-0hwxc864
Pod 4.3.0-0.ci-2019-11-06-064742-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute 55m57s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
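To reproduce this locally, a minimal sketch (assuming a checkout of openshift/origin and admin access to a running 4.3 cluster; the kubeconfig path below is hypothetical) is to point KUBECONFIG at the cluster and re-run the same focus regex:

# hypothetical path; use your own cluster's admin kubeconfig
export KUBECONFIG=$HOME/clusters/ci-cluster/auth/kubeconfig
# same harness and focus regex as the command above; selects only the
# "Monitor cluster while tests execute" entry
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'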
37 error level events were detected during this test run:

Nov 06 07:30:21.324 E ns/openshift-marketplace pod/certified-operators-5977bd88c4-vfwwx node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 06 07:30:22.434 E ns/openshift-marketplace pod/community-operators-5c4b6466b5-25t4w node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 06 07:30:22.478 E ns/openshift-monitoring pod/openshift-state-metrics-6bc67587f-dn9fl node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 07:30:22.478 E ns/openshift-monitoring pod/openshift-state-metrics-6bc67587f-dn9fl node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 07:30:22.478 E ns/openshift-monitoring pod/openshift-state-metrics-6bc67587f-dn9fl node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 07:36:05.229 E ns/openshift-machine-config-operator pod/machine-config-daemon-4ntrp node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 07:39:23.805 E ns/openshift-machine-config-operator pod/machine-config-daemon-ltjw9 node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 07:43:48.968 E ns/openshift-machine-config-operator pod/machine-config-daemon-6ztwr node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 07:47:57.646 E ns/openshift-machine-config-operator pod/machine-config-daemon-mdtzz node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 06 08:08:35.643 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 08:09:12.491 E ns/openshift-sdn pod/sdn-nl4wt node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=install-cni-plugins init container exited with code 1 (Error): 
Nov 06 08:09:25.995 E ns/openshift-monitoring pod/prometheus-adapter-5fd4cf6c57-x4zcb node/ci-op--lqhk5-w-b-n8dw4.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1106 07:30:30.839498       1 adapter.go:93] successfully using in-cluster auth\nI1106 07:30:31.519549       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 08:09:26.021 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-wmzsb node/ci-op--lqhk5-w-b-n8dw4.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/06 07:15:57 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 07:15:57 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 07:15:57 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 07:15:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/06 07:15:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 07:15:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 07:15:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 07:15:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/06 07:15:57 http.go:96: HTTPS: listening on [::]:9091\n
Nov 06 08:09:27.072 E ns/openshift-ingress pod/router-default-7ff9769f68-gggzl node/ci-op--lqhk5-w-b-n8dw4.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:44:04.091Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:47:56.751Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:49:03.838Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:49:30.207Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:49:58.889Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T07:56:22.026Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:09:03.824Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:09:08.826Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:09:13.826Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:09:19.570Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:09:24.563Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 08:10:00.227 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--lqhk5-w-c-vwcs7.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T08:09:50.170Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T08:09:50.177Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T08:09:50.178Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T08:09:50.179Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T08:09:50.180Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T08:09:50.180Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T08:09:50.180Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T08:09:50.180Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T08:09:50.183Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T08:09:50.183Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 08:10:02.661 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T07:16:07.812Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T07:16:07.815Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T07:16:07.816Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T07:16:07.817Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T07:16:07.817Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T07:16:07.817Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T07:16:07.819Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T07:16:07.819Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 08:10:02.661 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-06T07:16:09.570384314Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2019-11-06T07:16:09.570525172Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-06T07:16:09.574255711Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-06T07:16:14.681937329Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-06T07:17:29.593923161Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 06 08:10:02.679 E ns/openshift-monitoring pod/telemeter-client-587b5884dd-xhn9n node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=reload container exited with code 2 (Error): 
Nov 06 08:10:02.679 E ns/openshift-monitoring pod/telemeter-client-587b5884dd-xhn9n node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Nov 06 08:10:02.715 E ns/openshift-marketplace pod/community-operators-5c4b6466b5-rkrcs node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 06 08:10:02.851 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-bgkht node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/06 07:15:57 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 07:15:57 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 07:15:57 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 07:15:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/06 07:15:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 07:15:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/06 07:15:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 07:15:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/06 07:15:57 http.go:96: HTTPS: listening on [::]:9091\n
Nov 06 08:10:03.101 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:10:03.101 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:10:03.101 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:10:03.101 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:10:03.101 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:10:23.036 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--lqhk5-w-b-p8gpn.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-06T08:10:19.058Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-06T08:10:19.063Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-06T08:10:19.063Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-06T08:10:19.064Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-06T08:10:19.064Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-06T08:10:19.065Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-06T08:10:19.065Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-06T08:10:19.065Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-06T08:10:19.067Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-06T08:10:19.067Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-06
Nov 06 08:10:28.840 E ns/openshift-ingress pod/router-default-7ff9769f68-plqp6 node/ci-op--lqhk5-w-d-f6d2v.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:11:13.596 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 08:18:15.478 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-zcxrt node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:18:15.478 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-zcxrt node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:18:15.478 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-zcxrt node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:18:15.478 E ns/openshift-monitoring pod/thanos-querier-64894dfc6c-zcxrt node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 08:18:15.512 E ns/openshift-ingress pod/router-default-7ff9769f68-p6bl7 node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): outer.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:10:39.118Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-06T08:10:39.368Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:10:44.367Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:10:51.498Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:10:56.484Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:11:01.485Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:11:06.485Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:11:13.487Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:11:18.485Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:14:00.229Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-06T08:18:13.697Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 06 08:19:50.704 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/06 08:18:38 Watching directory: "/etc/alertmanager/config"\n
Nov 06 08:19:50.704 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 08:18:38 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 08:18:38 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 08:18:38 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 08:18:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 08:18:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 08:18:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 08:18:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 08:18:38 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 08:19:50.813 E ns/openshift-machine-config-operator pod/machine-config-daemon-7blxn node/ci-op--lqhk5-w-d-slpsq.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
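A quick way to triage the 37 events above is to tally them by container exit code, which separates the routine SIGKILL/SIGTERM terminations during node reboots (137, 143, often ContainerStatusUnknown) from containers that actually crashed (codes 1 and 2). A minimal sketch, assuming the event list has been saved locally as events.txt (hypothetical filename):

# count error-level events per exit code
grep -oE 'exited with code [0-9]+' events.txt | sort | uniq -c | sort -rn
# list only the clusteroperator degradations (the dns Degraded=True transitions)
grep 'clusteroperator/' events.txt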

				
stdout/stderr available in junit_e2e_20191106-082137.xml



Passed tests: 57

Skipped tests: 8