Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2020-02-13 21:35
Elapsed: 1h26m
Work namespace: ci-op-bgkxfmdt
Pod: 4.4.0-0.nightly-2020-02-13-212616-gcp-serial
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (43m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
35 error level events were detected during this test run:

Feb 13 22:09:54.085 E ns/openshift-monitoring pod/grafana-7d66cf85bd-56lwl node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=grafana container exited with code 1 (Error): 
Feb 13 22:09:54.085 E ns/openshift-monitoring pod/grafana-7d66cf85bd-56lwl node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 13 22:09:54.132 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-64c84bcf8c-kqpx6 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 13 22:09:54.145 E ns/openshift-monitoring pod/thanos-querier-8bb74f9c5-n7rjs node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/13 22:03:22 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 22:03:22 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:03:22 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:03:22 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/13 22:03:22 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:03:22 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 22:03:22 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 22:03:22 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0213 22:03:22.516877       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 22:03:22 http.go:107: HTTPS: listening on [::]:9091\n
Feb 13 22:10:04.828 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-13T22:10:02.856Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-13T22:10:02.860Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-13T22:10:02.861Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-13T22:10:02.862Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-13T22:10:02.862Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-13T22:10:02.862Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-13T22:10:02.863Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-13T22:10:02.865Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-13T22:10:02.865Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-13
Feb 13 22:21:07.399 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 13 22:21:10.635 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/02/13 22:10:02 Watching directory: "/etc/alertmanager/config"\n
Feb 13 22:21:10.635 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/13 22:10:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:10:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:10:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:10:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/13 22:10:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:10:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:10:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 22:10:02 http.go:107: HTTPS: listening on [::]:9095\nI0213 22:10:02.961928       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 13 22:21:10.691 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/13 22:10:03 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 13 22:21:10.691 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/13 22:10:04 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/13 22:10:04 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:10:04 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:10:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/13 22:10:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:10:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/13 22:10:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 22:10:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0213 22:10:04.247529       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 22:10:04 http.go:107: HTTPS: listening on [::]:9091\n2020/02/13 22:14:52 oauthproxy.go:774: basicauth: 10.129.0.16:41132 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 13 22:21:10.691 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-gvlx8-w-b-bnzsp.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-13T22:10:03.262393371Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.1'."\nlevel=info ts=2020-02-13T22:10:03.262558531Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-13T22:10:03.264047659Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-13T22:10:08.396100394Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 13 22:21:23.637 E ns/openshift-marketplace pod/certified-operators-79df58d768-8cmd7 node/ci-op-gvlx8-w-c-qghjf.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Feb 13 22:21:24.722 E ns/openshift-marketplace pod/redhat-marketplace-5b5c59dd47-6jpt6 node/ci-op-gvlx8-w-c-qghjf.c.openshift-gce-devel-ci.internal container=redhat-marketplace container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 22:21:24.761 E ns/openshift-marketplace pod/redhat-operators-5665978d58-5jp5h node/ci-op-gvlx8-w-c-qghjf.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Feb 13 22:21:24.796 E ns/openshift-ingress pod/router-default-c497cc987-jg9xz node/ci-op-gvlx8-w-c-qghjf.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 22:21:32.705 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-gvlx8-w-b-nwvrq.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-13T22:21:28.561Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-13T22:21:28.568Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-13T22:21:28.571Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-13T22:21:28.572Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-13T22:21:28.572Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-13T22:21:28.572Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-13T22:21:28.575Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-13T22:21:28.575Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-13
Feb 13 22:21:37.104 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-64c84bcf8c-p626l node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 13 22:21:37.120 E ns/openshift-monitoring pod/prometheus-adapter-66966d7cd-nfjq6 node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0213 22:01:18.385922       1 adapter.go:93] successfully using in-cluster auth\nI0213 22:01:20.122333       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 13 22:21:37.158 E ns/openshift-ingress pod/router-default-c497cc987-69s8j node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:20:27.186235       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:20:39.053520       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:20:45.620901       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:20:50.620830       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:20:58.289973       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:05.184473       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:10.181505       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:15.910517       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:20.909916       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:25.910413       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0213 22:21:30.911423       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 13 22:21:37.218 E ns/openshift-monitoring pod/grafana-7d66cf85bd-4prms node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=grafana container exited with code 1 (Error): 
Feb 13 22:21:37.218 E ns/openshift-monitoring pod/grafana-7d66cf85bd-4prms node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 13 22:21:38.123 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/13 22:03:08 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:03:08 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:03:08 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:03:08 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/13 22:03:08 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:03:08 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:03:08 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0213 22:03:08.444982       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 22:03:08 http.go:107: HTTPS: listening on [::]:9095\n
Feb 13 22:21:38.123 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (OOMKilled): 2020/02/13 22:03:08 Watching directory: "/etc/alertmanager/config"\n
Feb 13 22:21:38.202 E ns/openshift-monitoring pod/thanos-querier-8bb74f9c5-lbcjf node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/13 22:03:17 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 22:03:17 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:03:17 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:03:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/13 22:03:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:03:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 22:03:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 22:03:17 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0213 22:03:17.879374       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 22:03:17 http.go:107: HTTPS: listening on [::]:9091\n
Feb 13 22:21:38.218 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-544d444d99-rpn27 node/ci-op-gvlx8-w-d-qxw8h.c.openshift-gce-devel-ci.internal container=operator container exited with code 255 (Error):  m=+906.952428543\nI0213 22:16:25.640967       1 operator.go:147] Finished syncing operator at 40.536431ms\nI0213 22:16:25.641037       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:25.64103293 +0000 UTC m=+906.993043823\nI0213 22:16:25.672096       1 operator.go:147] Finished syncing operator at 31.04937ms\nI0213 22:16:25.672161       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:25.672155933 +0000 UTC m=+907.024166863\nI0213 22:16:25.968923       1 operator.go:147] Finished syncing operator at 296.753588ms\nI0213 22:16:25.968984       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:25.968979465 +0000 UTC m=+907.320990292\nI0213 22:16:26.568712       1 operator.go:147] Finished syncing operator at 599.725333ms\nI0213 22:16:27.701615       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:27.701598184 +0000 UTC m=+909.053609022\nI0213 22:16:27.736224       1 operator.go:147] Finished syncing operator at 34.614077ms\nI0213 22:16:27.736298       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:27.736292102 +0000 UTC m=+909.088302936\nI0213 22:16:27.790962       1 operator.go:147] Finished syncing operator at 54.654539ms\nI0213 22:16:27.791020       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:27.791015605 +0000 UTC m=+909.143026438\nI0213 22:16:28.365236       1 operator.go:147] Finished syncing operator at 574.168864ms\nI0213 22:16:28.365291       1 operator.go:145] Starting syncing operator at 2020-02-13 22:16:28.365287356 +0000 UTC m=+909.717298179\nI0213 22:16:28.965093       1 operator.go:147] Finished syncing operator at 599.796087ms\nI0213 22:21:35.554033       1 operator.go:145] Starting syncing operator at 2020-02-13 22:21:35.55401763 +0000 UTC m=+1216.906028465\nI0213 22:21:35.649010       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0213 22:21:35.649717       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0213 22:21:35.649756       1 builder.go:210] server exited\n
Feb 13 22:21:52.116 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 13 22:22:10.477 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-56c799ffb7-4jst8 node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 22:22:13.016 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-gvlx8-w-d-xz57b.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-13T22:22:08.570Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-13T22:22:08.575Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-13T22:22:08.576Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-13T22:22:08.577Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-13T22:22:08.577Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-13T22:22:08.577Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-13T22:22:08.581Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-13T22:22:08.581Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-13
Feb 13 22:31:42.007 E ns/openshift-machine-config-operator pod/machine-config-daemon-8lbxv node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 13 22:32:08.046 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-867d567fd5-jfwqd node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 22:32:11.117 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 13 22:36:18.954 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/02/13 22:31:57 Watching directory: "/etc/alertmanager/config"\n
Feb 13 22:36:18.954 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/13 22:31:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:31:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 22:31:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 22:31:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/13 22:31:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 22:31:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 22:31:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0213 22:31:57.264284       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 22:31:57 http.go:107: HTTPS: listening on [::]:9095\n
Feb 13 22:40:56.394 E ns/openshift-authentication pod/oauth-openshift-55db8d6f69-4l254 node/ci-op-gvlx8-m-0.c.openshift-gce-devel-ci.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 22:45:26.601 E ns/openshift-machine-config-operator pod/machine-config-daemon-8c4mg node/ci-op-gvlx8-w-c-sbjth.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
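Most of the events above share one shape: "ns/<namespace> pod/<pod> node/<node> container=<name> container exited with code <N> (<reason>)". As a hedged illustration only (this is not part of openshift-tests or the CI tooling), a minimal Go sketch along these lines could group the container-exit events by namespace, container, and exit code when the raw event lines are fed in on stdin; the clusteroperator status-change events simply do not match and are skipped:

// errsummary.go: hypothetical helper for summarizing the error-level events
// listed above. It assumes the events have been saved one per line in the
// same text form shown in this report.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Capture namespace, container name, exit code, and reason from
	// container-exit events; other event types (e.g. clusteroperator
	// Degraded changes) are ignored.
	re := regexp.MustCompile(`ns/(\S+) pod/\S+ node/\S+ container=(\S+) container exited with code (\d+) \((\w+)\)`)

	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	// Allow unusually long event lines (some excerpts above run to several KB).
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			key := fmt.Sprintf("ns/%s container=%s code=%s (%s)", m[1], m[2], m[3], m[4])
			counts[key]++
		}
	}

	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%3d  %s\n", counts[k], k)
	}
}

Usage would be something like "go run errsummary.go < events.txt", where events.txt is a hypothetical file containing the event lines above; the output makes it easier to see that the bulk of the failures are oauth-proxy/config-reloader sidecars and snapshot-controller pods exiting during node disruption.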

				
Full stdout/stderr: junit_e2e_20200213-225218.xml



Passed tests: 57
Skipped tests: 25