Result: FAILURE
Tests: 3 failed / 53 succeeded
Started: 2019-11-08 02:03
Elapsed: 2h26m
Work namespace: ci-op-i0w50ynf
Pod: 4.3.0-0.nightly-2019-11-08-015817-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute 1h11m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
39 error-level events were detected during this test run:

Nov 08 03:06:24.911 E ns/openshift-monitoring pod/grafana-85bf74556f-s45x9 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=grafana-proxy container exited with code 2 (Error): 
Nov 08 03:06:24.949 E ns/openshift-machine-config-operator pod/machine-config-daemon-vdjjn node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=oauth-proxy container exited with code 143 (Error): 
Nov 08 03:06:25.958 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T02:45:45.416Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T02:45:45.418Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T02:45:45.421Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T02:45:45.423Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T02:45:45.423Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T02:45:45.423Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T02:45:45.424Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T02:45:45.430Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T02:45:45.430Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 03:06:25.958 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 02:45:57 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 02:46:06 config map updated\n2019/11/08 02:46:06 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n
Nov 08 03:06:25.958 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T02:45:57.361811333Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T02:45:57.361974534Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T02:45:57.364763056Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T02:46:02.36520296Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T02:46:07.557704785Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T02:47:27.662672524Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 03:06:25.995 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 03:06:25.995 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 03:06:25.995 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 03:06:26.028 E ns/openshift-monitoring pod/prometheus-adapter-5776b48f-8pw7d node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-dshsk container=prometheus-adapter container exited with code 2 (Error): I1108 02:46:47.725375       1 adapter.go:93] successfully using in-cluster auth\nI1108 02:46:48.373382       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 03:06:58.030 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T03:06:45.108Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T03:06:45.117Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T03:06:45.117Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T03:06:45.118Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T03:06:45.118Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T03:06:45.119Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T03:06:45.119Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T03:06:45.119Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T03:06:45.122Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T03:06:45.123Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 03:22:41.229 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 03:22:52.678 E ns/openshift-monitoring pod/prometheus-adapter-5776b48f-hq9cj node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=prometheus-adapter container exited with code 2 (Error): I1108 03:06:34.645833       1 adapter.go:93] successfully using in-cluster auth\nI1108 03:06:35.259922       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 03:22:52.699 E ns/openshift-marketplace pod/community-operators-68cb6c8c8b-85kgz node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=community-operators container exited with code 2 (Error): 
Nov 08 03:22:52.751 E ns/openshift-ingress pod/router-default-949758f74-lk9bf node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:07:49.688Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:11:08.927Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:11:13.792Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:12:16.759Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:12:40.177Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:21:05.504Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:21:10.509Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:21:29.459Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:21:41.179Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:21:46.178Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:22:46.945Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 03:22:53.423 E ns/openshift-marketplace pod/redhat-operators-7c59bd8d76-km2mh node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=redhat-operators container exited with code 2 (Error): 
Nov 08 03:22:53.463 E ns/openshift-monitoring pod/openshift-state-metrics-6c78647cc7-429pz node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 03:22:53.498 E ns/openshift-marketplace pod/certified-operators-794d849b9-scctr node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=certified-operators container exited with code 2 (Error): 
Nov 08 03:22:53.520 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-dz8gn node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 03:22:54.494 E ns/openshift-monitoring pod/telemeter-client-7b47d88d95-rccms node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=telemeter-client container exited with code 2 (Error): 
Nov 08 03:22:54.494 E ns/openshift-monitoring pod/telemeter-client-7b47d88d95-rccms node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-26h99 container=reload container exited with code 2 (Error): 
Nov 08 03:24:07.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T03:23:47.320Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T03:23:47.327Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T03:23:47.329Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T03:23:47.331Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T03:23:47.331Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T03:23:47.332Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T03:23:47.332Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T03:23:47.332Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T03:23:47.334Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T03:23:47.340Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T03:23:47.340Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 03:44:51.983 E ns/openshift-marketplace pod/community-operators-68cb6c8c8b-xt8b9 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-h5p7l container=community-operators container exited with code 2 (Error): 
Nov 08 03:44:52.012 E ns/openshift-ingress pod/router-default-949758f74-n4vtb node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-h5p7l container=router container exited with code 2 (Error): ": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:39:08.431Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:39:13.422Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:39:18.400Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:43:15.632Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:43:20.619Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1108 03:43:27.330613       1 limiter.go:140] error reloading router: wait: no child processes\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n2019-11-08T03:43:32.342Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:43:58.387Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:44:03.367Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:44:39.792Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:44:44.770Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 03:44:52.101 E ns/openshift-monitoring pod/prometheus-adapter-5776b48f-xtzbl node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-h5p7l container=prometheus-adapter container exited with code 2 (Error): I1108 03:24:23.934557       1 adapter.go:93] successfully using in-cluster auth\nI1108 03:24:25.012850       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 04:01:00.740 E clusteroperator/machine-config changed Degraded to True: MachineConfigDaemonFailed: Failed to resync 4.3.0-0.nightly-2019-11-08-015817 because: Operation cannot be fulfilled on daemonsets.apps "machine-config-daemon": the object has been modified; please apply your changes to the latest version and try again
Nov 08 04:08:17.623 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-6k6rd node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=kube-state-metrics container exited with code 2 (OOMKilled): 
Nov 08 04:08:17.648 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 03:23:51 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 08 04:08:17.648 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 03:24:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 03:24:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 03:24:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 03:24:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 03:24:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 03:24:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 03:24:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 03:24:00 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 03:24:00 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 04:08:17.648 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T03:23:51.153717408Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T03:23:51.153991207Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T03:23:51.155800894Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T03:23:56.156700515Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T03:24:01.156534643Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T03:24:06.155744475Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T03:24:11.431147052Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 04:08:17.758 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=config-reloader container exited with code 2 (Error): 2019/11/08 03:45:11 Watching directory: "/etc/alertmanager/config"\n
Nov 08 04:08:17.758 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 03:45:12 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 03:45:12 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 03:45:12 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 03:45:12 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 03:45:12 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 03:45:12 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 03:45:12 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 03:45:12 http.go:96: HTTPS: listening on [::]:9095\n2019/11/08 03:46:25 server.go:3012: http: TLS handshake error from 10.128.4.33:39502: read tcp 10.128.4.35:9095->10.128.4.33:39502: read: connection reset by peer\n
Nov 08 04:08:17.871 E ns/openshift-marketplace pod/community-operators-68cb6c8c8b-s9cnx node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=community-operators container exited with code 2 (OOMKilled): 
Nov 08 04:08:17.922 E ns/openshift-monitoring pod/prometheus-adapter-5776b48f-sh659 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus-adapter container exited with code 2 (Error): I1108 03:45:02.497384       1 adapter.go:93] successfully using in-cluster auth\nI1108 03:45:03.014253       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 04:08:17.965 E ns/openshift-marketplace pod/certified-operators-794d849b9-25g7t node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=certified-operators container exited with code 2 (Error): 
Nov 08 04:08:18.087 E ns/openshift-monitoring pod/telemeter-client-7b47d88d95-ntfg6 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=reload container exited with code 2 (Error): 
Nov 08 04:08:18.087 E ns/openshift-monitoring pod/telemeter-client-7b47d88d95-ntfg6 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=telemeter-client container exited with code 2 (OOMKilled): 
Nov 08 04:08:18.137 E ns/openshift-ingress pod/router-default-949758f74-qvtbj node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:56:25.533Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T03:56:53.456Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:00:53.673Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:00:58.640Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:01:03.637Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:01:27.869Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:05:01.826Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:05:06.796Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:05:43.586Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:06:01.415Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T04:08:11.485Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 04:08:18.157 E ns/openshift-marketplace pod/redhat-operators-7c59bd8d76-cw2zq node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=redhat-operators container exited with code 2 (Error): 
Nov 08 04:09:08.863 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T04:09:06.194Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T04:09:06.202Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T04:09:06.202Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T04:09:06.204Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T04:09:06.208Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T04:09:06.208Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08

				
from junit_e2e_20191108-040929.xml



openshift-tests [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation [Suite:openshift/conformance/serial] [Suite:k8s] 2m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\savoid\snodes\sthat\shave\savoidPod\sannotation\s\[Suite\:openshift\/conformance\/serial\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/util.go:1167]: Expected
    <string>: ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-h5p7l
not to equal
    <string>: ci-op-i0w50ynf-09f59-9g8bd-worker-centralus1-h5p7l
				
from junit_e2e_20191108-040929.xml



operator Run template e2e-azure-serial - e2e-azure-serial container test 1h11m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-azure\-serial\s\-\se2e\-azure\-serial\scontainer\stest$'
ed" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T04:09:06.203Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T04:09:06.204Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T04:09:06.208Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T04:09:06.208Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 04:09:09.073 I ns/openshift-monitoring pod/prometheus-k8s-1 Created container prometheus (2 times)
Nov 08 04:09:09.102 I ns/openshift-monitoring pod/prometheus-k8s-1 Started container prometheus (2 times)
Nov 08 04:09:09.867 W ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-i0w50ynf-09f59-9g8bd-worker-centralus3-srw5h container=prometheus container restarted

Failing tests:

[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation [Suite:openshift/conformance/serial] [Suite:k8s]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20191108-040929.xml

error: 1 fail, 47 pass, 173 skip (1h11m51s)

from junit_operator.xml



Passed tests: 53
Skipped tests: 173