Result: FAILURE
Tests: 5 failed / 49 succeeded
Started: 2020-02-10 13:36
Elapsed: 2h20m
Work namespace: ci-op-tn92ybgf
pod: 4.4.0-0.nightly-2020-02-10-133346-azure-serial
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (1h6m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
62 error level events were detected during this test run:

Feb 10 14:21:52.598 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/10 14:18:27 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 10 14:21:52.598 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-proxy container exited with code 2 (Error): 2020/02/10 14:18:29 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/10 14:18:29 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:18:29 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:18:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/10 14:18:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:18:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/10 14:18:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:18:29 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/10 14:18:29 http.go:96: HTTPS: listening on [::]:9091\n
Feb 10 14:21:52.598 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-10T14:18:26.950658964Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.0'."\nlevel=info ts=2020-02-10T14:18:26.950838064Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-10T14:18:26.953212465Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-10T14:18:32.333244002Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 10 14:21:52.599 E ns/openshift-ingress pod/router-default-67c784dbd7-28clg node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:18:28.812873       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:18:33.310432       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:18:38.324635       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:19:11.225126       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:19:33.611763       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:19:38.604761       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:19:44.256808       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:19:52.628680       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:20:31.805032       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:20:36.784825       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:21:49.062665       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 10 14:21:53.651 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=config-reloader container exited with code 2 (Error): 2020/02/10 14:17:07 Watching directory: "/etc/alertmanager/config"\n
Feb 10 14:21:53.651 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 14:17:07 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:17:07 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:17:07 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:17:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 14:17:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:17:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:17:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:17:07 http.go:96: HTTPS: listening on [::]:9095\n
Feb 10 14:21:53.710 E ns/openshift-monitoring pod/prometheus-adapter-d4cd55986-mplv5 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-adapter container exited with code 2 (Error): I0210 14:16:06.137098       1 adapter.go:93] successfully using in-cluster auth\nI0210 14:16:07.549163       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 10 14:22:09.750 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:22:06.598Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:22:06.603Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:22:06.606Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:22:06.606Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:22:06.613Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:22:06.613Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:26:28.250 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:22:06.598Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:22:06.603Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:22:06.606Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:22:06.606Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:22:06.607Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:22:06.607Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:22:06.613Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:22:06.613Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:26:28.250 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-10T14:22:07.049067596Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.0'."\nlevel=info ts=2020-02-10T14:22:07.049200996Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-10T14:22:07.051689498Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-10T14:22:12.476728361Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 10 14:26:28.288 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=config-reloader container exited with code 2 (Error): 2020/02/10 14:22:07 Watching directory: "/etc/alertmanager/config"\n
Feb 10 14:26:28.288 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 14:22:07 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:22:07 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:22:07 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:22:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 14:22:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:22:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:22:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:22:07 http.go:96: HTTPS: listening on [::]:9095\n
Feb 10 14:26:28.356 E ns/openshift-machine-config-operator pod/machine-config-daemon-n6j4d node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=oauth-proxy container exited with code 143 (Error): 
Feb 10 14:27:40.492 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:27:38.252Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:27:38.256Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:27:38.257Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:27:38.269Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:27:38.269Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:31:07.217 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:27:38.252Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:27:38.256Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:27:38.257Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:27:38.259Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:27:38.259Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:27:38.269Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:27:38.269Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:31:07.217 E ns/openshift-machine-config-operator pod/machine-config-daemon-g7j6z node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=oauth-proxy container exited with code 143 (Error): 
Feb 10 14:31:07.264 E ns/openshift-image-registry pod/node-ca-l8mpf node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:32:20.391 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:32:17.516Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:32:17.529Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:32:17.529Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:32:17.531Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:32:17.531Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:32:17.535Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:32:17.535Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:45:58.354 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 10 14:46:13.710 E ns/openshift-monitoring pod/telemeter-client-554f4784f6-9lzgd node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=reload container exited with code 2 (Error): 
Feb 10 14:46:13.710 E ns/openshift-monitoring pod/telemeter-client-554f4784f6-9lzgd node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=telemeter-client container exited with code 2 (Error): 
Feb 10 14:46:16.853 E ns/openshift-monitoring pod/openshift-state-metrics-cdfb76f97-mmgpg node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=openshift-state-metrics container exited with code 2 (Error): 
Feb 10 14:46:16.902 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=config-reloader container exited with code 2 (Error): 2020/02/10 14:26:46 Watching directory: "/etc/alertmanager/config"\n
Feb 10 14:46:16.902 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 14:26:46 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:26:46 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:26:46 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:26:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 14:26:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:26:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:26:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:26:46 http.go:96: HTTPS: listening on [::]:9095\n
Feb 10 14:46:16.950 E ns/openshift-marketplace pod/certified-operators-77c49cd85d-q2fnb node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=certified-operators container exited with code 2 (Error): 
Feb 10 14:46:16.975 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=config-reloader container exited with code 2 (Error): 2020/02/10 14:16:43 Watching directory: "/etc/alertmanager/config"\n
Feb 10 14:46:16.975 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 14:16:43 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:16:43 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:16:43 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:16:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 14:16:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:16:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:16:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:16:43 http.go:96: HTTPS: listening on [::]:9095\n
Feb 10 14:46:17.007 E ns/openshift-monitoring pod/prometheus-adapter-d4cd55986-tlhlf node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=prometheus-adapter container exited with code 2 (Error): I0210 14:22:02.073273       1 adapter.go:93] successfully using in-cluster auth\nI0210 14:22:02.449046       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 10 14:46:17.072 E ns/openshift-marketplace pod/redhat-marketplace-587f6964c5-hn5qm node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=redhat-marketplace container exited with code 2 (Error): 
Feb 10 14:46:17.116 E ns/openshift-monitoring pod/kube-state-metrics-bd8f6d6cf-llxpl node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=kube-state-metrics container exited with code 2 (Error): 
Feb 10 14:46:17.165 E ns/openshift-kube-storage-version-migrator pod/migrator-568797b68f-vbftp node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=migrator container exited with code 2 (Error): 
Feb 10 14:46:17.191 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-849b48ccdd-krtqp node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=operator container exited with code 255 (Error): r at 36.447578ms\nI0210 14:23:51.999052       1 operator.go:145] Starting syncing operator at 2020-02-10 14:23:51.999040953 +0000 UTC m=+539.708621997\nI0210 14:23:52.267936       1 operator.go:147] Finished syncing operator at 268.889421ms\nI0210 14:23:52.267991       1 operator.go:145] Starting syncing operator at 2020-02-10 14:23:52.267986973 +0000 UTC m=+539.977568017\nI0210 14:23:52.878005       1 operator.go:147] Finished syncing operator at 610.012004ms\nI0210 14:24:01.061763       1 operator.go:145] Starting syncing operator at 2020-02-10 14:24:01.061755376 +0000 UTC m=+548.771336420\nI0210 14:24:01.094829       1 operator.go:147] Finished syncing operator at 33.067708ms\nI0210 14:24:01.094870       1 operator.go:145] Starting syncing operator at 2020-02-10 14:24:01.094865383 +0000 UTC m=+548.804446527\nI0210 14:24:01.143520       1 operator.go:147] Finished syncing operator at 48.65077ms\nI0210 14:24:01.143553       1 operator.go:145] Starting syncing operator at 2020-02-10 14:24:01.143549653 +0000 UTC m=+548.853130697\nI0210 14:24:01.183176       1 operator.go:147] Finished syncing operator at 39.620149ms\nI0210 14:24:01.183255       1 operator.go:145] Starting syncing operator at 2020-02-10 14:24:01.183231202 +0000 UTC m=+548.892812346\nI0210 14:24:01.471387       1 operator.go:147] Finished syncing operator at 288.14965ms\nI0210 14:24:01.471482       1 operator.go:145] Starting syncing operator at 2020-02-10 14:24:01.471460352 +0000 UTC m=+549.181041396\nI0210 14:24:02.073576       1 operator.go:147] Finished syncing operator at 602.109173ms\nI0210 14:41:42.878228       1 operator.go:145] Starting syncing operator at 2020-02-10 14:41:42.878218178 +0000 UTC m=+1610.587799322\nI0210 14:41:42.911542       1 operator.go:147] Finished syncing operator at 33.302305ms\nI0210 14:46:11.245246       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0210 14:46:11.245694       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0210 14:46:11.245720       1 builder.go:210] server exited\n
Feb 10 14:46:17.213 E ns/openshift-ingress pod/router-default-67c784dbd7-njfvn node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-mnwgl container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:32:24.282717       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:32:29.274700       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:32:34.274581       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:44:36.162543       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:44:41.157398       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:44:47.261726       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:44:55.235794       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:45:49.110000       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:45:54.904460       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:45:59.908980       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0210 14:46:10.867186       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 10 14:46:28.439 E ns/openshift-marketplace pod/redhat-operators-745cd55bdc-8zhdl node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-g7h7p container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:28.473 E ns/openshift-monitoring pod/telemeter-client-554f4784f6-qsz4l node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-g7h7p container=reload container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:28.473 E ns/openshift-monitoring pod/telemeter-client-554f4784f6-qsz4l node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-g7h7p container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:28.473 E ns/openshift-monitoring pod/telemeter-client-554f4784f6-qsz4l node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-g7h7p container=telemeter-client container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:29.519 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-849b48ccdd-rc5qz node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-g7h7p container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:41.164 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 10 14:46:47.777 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:32:17.516Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:32:17.529Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:32:17.529Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:32:17.530Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:32:17.531Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:32:17.531Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:32:17.535Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:32:17.535Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:46:47.777 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-10T14:32:17.856806653Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.0'."\nlevel=info ts=2020-02-10T14:32:17.856922755Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-10T14:32:17.859175708Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-10T14:32:23.004931034Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 10 14:46:47.834 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:47.834 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:47.834 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:47.834 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:47.834 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:46:48.790 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=alertmanager container exited with code 2 (Error): level=info ts=2020-02-10T14:46:33.393Z caller=main.go:231 msg="Starting Alertmanager" version="(version=0.20.0, branch=rhaos-4.4-rhel-7, revision=0d174fe513d63c143165c1acc24ec8bd1f36f72a)"\nlevel=info ts=2020-02-10T14:46:33.393Z caller=main.go:232 build_context="(go=go1.13.4, user=root@6c5c1b7969e4, date=20200131-08:49:54)"\n
Feb 10 14:47:40.810 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6b9b99dd49-m58rq node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus2-gv85k container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:47:58.951 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-ng98p container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-10T14:47:48.422Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-10T14:47:48.423Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-10T14:47:48.425Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-10T14:47:48.426Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-10T14:47:48.427Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-10T14:47:48.427Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-10T14:47:48.427Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-10T14:47:48.431Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-10T14:47:48.431Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-10
Feb 10 14:48:08.772 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:48:08.772 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus3-5tpsr container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 10 14:51:56.968 E clusteroperator/monitoring changed Degraded to True: UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: waiting for Alertmanager object changes failed: waiting for Alertmanager: expected 3 replicas, updated 2 and available 2
Feb 10 15:12:15.865 E ns/openshift-monitoring pod/prometheus-adapter-d4cd55986-hr8c5 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=prometheus-adapter container exited with code 2 (Error): I0210 14:47:04.851202       1 adapter.go:93] successfully using in-cluster auth\nI0210 14:47:05.361956       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 10 15:12:15.882 E ns/openshift-kube-storage-version-migrator pod/migrator-568797b68f-n827r node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=migrator container exited with code 2 (Error): 
Feb 10 15:12:15.898 E ns/openshift-marketplace pod/redhat-marketplace-587f6964c5-h5bbn node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=redhat-marketplace container exited with code 2 (Error): 
Feb 10 15:12:15.920 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=config-reloader container exited with code 2 (Error): 2020/02/10 14:47:28 Watching directory: "/etc/alertmanager/config"\n
Feb 10 15:12:15.920 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 14:47:28 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:47:28 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 14:47:28 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 14:47:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 14:47:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 14:47:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 14:47:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 14:47:28 http.go:96: HTTPS: listening on [::]:9095\n
Feb 10 15:12:15.957 E ns/openshift-machine-config-operator pod/machine-config-daemon-shvf6 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=oauth-proxy container exited with code 143 (Error): 
Feb 10 15:12:16.006 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-849b48ccdd-9n5rq node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=operator container exited with code 255 (Error): 4       1 operator.go:145] Starting syncing operator at 2020-02-10 14:47:38.713110343 +0000 UTC m=+13.291130931\nI0210 14:47:38.720861       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"65c819f0-d507-4986-91f0-e1602f6da98c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("")\nI0210 14:47:38.757447       1 operator.go:147] Finished syncing operator at 44.334108ms\nI0210 14:47:38.757476       1 operator.go:145] Starting syncing operator at 2020-02-10 14:47:38.757472751 +0000 UTC m=+13.335493339\nI0210 14:47:38.792252       1 operator.go:147] Finished syncing operator at 34.776263ms\nI0210 15:07:27.021001       1 operator.go:145] Starting syncing operator at 2020-02-10 15:07:27.020989171 +0000 UTC m=+1201.599009859\nI0210 15:07:27.045373       1 operator.go:147] Finished syncing operator at 24.38038ms\nI0210 15:09:48.948132       1 operator.go:145] Starting syncing operator at 2020-02-10 15:09:48.948107655 +0000 UTC m=+1343.526128243\nI0210 15:09:48.969514       1 operator.go:147] Finished syncing operator at 21.40147ms\nI0210 15:12:12.067801       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0210 15:12:12.068156       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0210 15:12:12.068284       1 logging_controller.go:93] Shutting down LogLevelController\nI0210 15:12:12.068299       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0210 15:12:12.068311       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0210 15:12:12.068356       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nF0210 15:12:12.068370       1 builder.go:243] stopped\n
Feb 10 15:13:31.650 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-869b64b85b-fnbwr node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=snapshot-controller container exited with code 2 (Error): 
Feb 10 15:13:31.664 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=config-reloader container exited with code 2 (Error): 2020/02/10 15:12:33 Watching directory: "/etc/alertmanager/config"\n
Feb 10 15:13:31.664 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-tn92ybgf-0ba00-8n9j9-worker-centralus1-qzd6m container=alertmanager-proxy container exited with code 2 (Error): 2020/02/10 15:12:33 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 15:12:33 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/10 15:12:33 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/10 15:12:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/10 15:12:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/10 15:12:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/10 15:12:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/10 15:12:33 http.go:96: HTTPS: listening on [::]:9095\n
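
Note: most of the error-level events above are container exits (codes 1, 2, 137, 143) recorded while pods were restarted or rescheduled during this serial run. As a triage aid only (this is not part of openshift-tests), below is a minimal client-go sketch that lists containers whose last termination ended with a non-zero exit code, similar to the exits reported above; the kubeconfig path and the use of a recent client-go release (where List takes a context) are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every pod in the cluster and report containers whose last
	// termination ended with a non-zero exit code.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			t := cs.LastTerminationState.Terminated
			if t != nil && t.ExitCode != 0 {
				fmt.Printf("ns/%s pod/%s container=%s exited with code %d (%s)\n",
					pod.Namespace, pod.Name, cs.Name, t.ExitCode, t.Reason)
			}
		}
	}
}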

				
stdout/stderr captured in junit_e2e_20200210-152823.xml



openshift-tests [Feature:Machines][Serial] Managed cluster should [Top Level] [Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously [Suite:openshift/conformance/serial] (14m50s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Machines\]\[Serial\]\sManaged\scluster\sshould\s\[Top\sLevel\]\s\[Feature\:Machines\]\[Serial\]\sManaged\scluster\sshould\sgrow\sand\sdecrease\swhen\sscaling\sdifferent\smachineSets\ssimultaneously\s\[Suite\:openshift\/conformance\/serial\]$'
fail [github.com/openshift/origin/test/extended/machines/scale.go:223]: Timed out after 420.000s.
Expected
    <bool>: false
to be true
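
The failure at scale.go:223 is a Gomega Eventually timeout: a boolean condition polled for 420s never became true, presumably the check that the scaled machineSets reached their expected machine/node counts. Below is a minimal sketch of the kind of assertion that produces this exact output; the condition body, test name, and polling interval are hypothetical, not the actual test code.

package scale_test

import (
	"testing"
	"time"

	. "github.com/onsi/gomega"
)

// Illustrative only: an Eventually poll on a boolean condition. When the
// condition never returns true within the timeout, Gomega reports
// "Timed out after 420.000s. Expected <bool>: false to be true",
// matching the failure above.
func TestMachineSetScaleWait(t *testing.T) {
	g := NewWithT(t)

	scaledOut := func() bool {
		// The real check compares observed machine/node counts against the
		// scaled machineSet replicas; hard-coded to false for illustration.
		return false
	}

	g.Eventually(scaledOut, 420*time.Second, 30*time.Second).Should(BeTrue())
}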