Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2020-08-05 02:04
Elapsed: 1h50m
Work namespace: ci-op-xhbxcz3v
Pod: 4.3.0-0.nightly-2020-08-05-015829-azure-serial
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 49m52s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
7 error level events were detected during this test run:

Aug 05 02:56:57.733 E ns/openshift-ingress pod/router-default-5554785787-7sqwx node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus1-7wl9q container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:46:58.029905       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:48:14.503006       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:48:26.518062       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:48:37.012475       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:49:01.882180       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:49:06.885009       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:49:11.893430       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:49:16.874699       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:49:47.876950       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:50:07.018219       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:54:33.208791       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 05 02:57:17.466 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus3-vdpjq container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-05T02:57:12.009Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-05T02:57:12.018Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-05T02:57:12.019Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-05T02:57:12.021Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-05T02:57:12.021Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-05T02:57:12.022Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-05T02:57:12.024Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-05T02:57:12.024Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-05
Aug 05 03:30:52.415 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 05 03:31:08.485 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus3-vdpjq container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/05 02:57:12 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 05 03:31:08.485 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus3-vdpjq container=prometheus-proxy container exited with code 2 (Error): 2020/08/05 02:57:16 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/05 02:57:16 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/05 02:57:16 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/05 02:57:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/05 02:57:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/05 02:57:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/05 02:57:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/05 02:57:16 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/05 02:57:16 http.go:106: HTTPS: listening on [::]:9091\n
Aug 05 03:31:08.485 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus3-vdpjq container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-05T02:57:12.215759838Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2020-08-05T02:57:12.215909451Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-05T02:57:12.21743188Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-08-05T02:57:17.217390038Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-05T02:57:22.38627338Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 05 03:31:35.093 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-xhbxcz3v-09f59-8qz5c-worker-centralus1-rvlwd container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-05T03:31:25.127Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-05T03:31:25.133Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-05T03:31:25.135Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-05T03:31:25.136Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-05T03:31:25.137Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-05T03:31:25.137Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-05T03:31:25.137Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-05T03:31:25.143Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-05T03:31:25.143Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-05

Full stdout/stderr for this test is in junit_e2e_20200805-033844.xml.
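For offline triage of that artifact, here is a minimal sketch (not part of the CI tooling) that lists the failed test cases from a downloaded copy of the junit file. It assumes a single <testsuite> root element and models only the attributes it needs; the filename below is the one referenced above.

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Only the fields needed to list failures; the real junit schema has more.
type testSuite struct {
	Cases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Time    string   `xml:"time,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Text string `xml:",chardata"`
}

func main() {
	// Assumes the junit artifact has been downloaded into the working directory.
	data, err := os.ReadFile("junit_e2e_20200805-033844.xml")
	if err != nil {
		panic(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		panic(err)
	}
	// Print only the test cases that carry a <failure> element.
	for _, tc := range suite.Cases {
		if tc.Failure != nil {
			fmt.Printf("FAILED (%ss): %s\n", tc.Time, tc.Name)
		}
	}
}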



54 Passed Tests

173 Skipped Tests