Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-08 06:07
Elapsed: 1h37m
Work namespace: ci-op-n6hrpzw1
Pod: 4.3.0-0.nightly-2019-11-08-060454-aws-serial

Test Failures


openshift-tests Monitor cluster while tests execute (58m27s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
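
The --ginkgo.focus value is a regular expression matched against full test names: \s matches the spaces in the name and $ anchors the match at the end of the name. A minimal Go sketch of what the pattern selects; the sample names are illustrative, not taken from the suite:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the --ginkgo.focus argument above.
	focus := regexp.MustCompile(`openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$`)
	fmt.Println(focus.MatchString("openshift-tests Monitor cluster while tests execute"))        // true
	fmt.Println(focus.MatchString("openshift-tests Monitor cluster while tests execute [Slow]")) // false: $ requires the name to end here
}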
28 error-level events were detected during this test run (a counting sketch follows the list):

Nov 08 06:39:48.663 E ns/openshift-monitoring pod/thanos-querier-9fc8b5b5b-2lvth node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/08 06:34:09 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 06:34:09 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 06:34:09 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 06:34:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 06:34:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 06:34:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 06:34:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 06:34:09 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 06:34:09 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 06:39:48.679 E ns/openshift-machine-config-operator pod/machine-config-daemon-zkfl4 node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 06:39:48.696 E ns/openshift-ingress pod/router-default-695d4c97d9-c28zc node/ip-10-0-132-144.ec2.internal container=router container exited with code 2 (Error): ng http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:35:08.562Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:35:13.558Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:35:39.858Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:35:44.858Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:35:49.856Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\nE1108 06:36:08.259602       1 limiter.go:140] error reloading router: waitid: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n2019-11-08T06:36:53.713Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:38:14.028Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:38:19.023Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T06:38:24.033Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 06:39:48.715 E ns/openshift-monitoring pod/prometheus-adapter-7c7dbd6cd7-8vpdv node/ip-10-0-132-144.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I1108 06:34:09.711168       1 adapter.go:93] successfully using in-cluster auth\nI1108 06:34:10.153585       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 06:39:49.763 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-144.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:39:49.763 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-144.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:39:49.763 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-144.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:45:16.410 E ns/openshift-machine-config-operator pod/machine-config-daemon-gs5wf node/ip-10-0-132-144.ec2.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:45:16.410 E ns/openshift-machine-config-operator pod/machine-config-daemon-gs5wf node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:57:23.803 E ns/openshift-machine-config-operator pod/machine-config-daemon-xjx2k node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 06:58:39.949 E ns/openshift-machine-config-operator pod/machine-config-daemon-zlp79 node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 07:09:25.392 E ns/openshift-authentication pod/oauth-openshift-c5c9ff7f5-v58j5 node/ip-10-0-138-204.ec2.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:16:10.968 E ns/openshift-machine-config-operator pod/machine-config-daemon-f68mx node/ip-10-0-132-144.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 07:20:33.855 E ns/openshift-dns pod/dns-default-znvkj node/ip-10-0-132-144.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:20:33.855 E ns/openshift-dns pod/dns-default-znvkj node/ip-10-0-132-144.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:33.865 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 07:29:36.932 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-30.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 06:34:29 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 08 07:29:36.932 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-30.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 9/11/08 06:34:32 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 06:34:32 http.go:96: HTTPS: listening on [::]:9091\n2019/11/08 06:35:07 oauthproxy.go:774: basicauth: 10.129.2.4:34972 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:39:37 oauthproxy.go:774: basicauth: 10.129.2.4:37322 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:44:07 oauthproxy.go:774: basicauth: 10.129.2.4:40996 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:48:37 oauthproxy.go:774: basicauth: 10.129.2.4:44768 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:53:08 oauthproxy.go:774: basicauth: 10.129.2.4:48472 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:57:38 oauthproxy.go:774: basicauth: 10.129.2.4:52194 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:02:08 oauthproxy.go:774: basicauth: 10.129.2.4:55982 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:06:38 oauthproxy.go:774: basicauth: 10.129.2.4:59706 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:11:08 oauthproxy.go:774: basicauth: 10.129.2.4:35242 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:15:38 oauthproxy.go:774: basicauth: 10.129.2.4:38960 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 07:20:08 oauthproxy.go:774: basicauth: 10.129.2.4:42726 Authorization header does not start with 'Basic', skipping basic authentication\n2
Nov 08 07:29:36.932 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-30.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T06:34:29.008481286Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-08T06:34:29.008629576Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T06:34:29.011196249Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2019-11-08T06:34:34.01530091Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T06:34:39.113412073Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T06:36:05.639392922Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 07:29:36.981 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-pwf5k node/ip-10-0-132-30.ec2.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:36.981 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-pwf5k node/ip-10-0-132-30.ec2.internal container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:36.981 E ns/openshift-monitoring pod/kube-state-metrics-c6cdd9b44-pwf5k node/ip-10-0-132-30.ec2.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:37.924 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-30.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:37.924 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-30.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:37.924 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-132-30.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:29:37.985 E ns/openshift-marketplace pod/community-operators-77ff6ccb96-gjcjv node/ip-10-0-132-30.ec2.internal container=community-operators container exited with code 2 (Error): 
Nov 08 07:29:51.099 E ns/openshift-marketplace pod/certified-operators-7575588c57-8k8ph node/ip-10-0-151-226.ec2.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 07:30:04.966 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-144.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T07:30:01.665Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T07:30:01.668Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T07:30:01.668Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T07:30:01.669Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T07:30:01.669Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T07:30:01.669Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T07:30:01.670Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T07:30:01.670Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
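
Each event above follows the monitor's "<timestamp> E <locator> <message>" line layout. A minimal Go sketch that buckets such error-level events by namespace, assuming whitespace-separated fields in exactly that layout (an observation from this output, not a documented format):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines run to several KB
	for sc.Scan() {
		// e.g. "Nov 08 06:39:48.663 E ns/openshift-monitoring pod/... container=... ..."
		fields := strings.Fields(sc.Text())
		// fields[3] is the level, fields[4] the locator; events with a
		// non-namespace locator (e.g. "clusteroperator/dns") are skipped.
		if len(fields) > 4 && fields[3] == "E" && strings.HasPrefix(fields[4], "ns/") {
			counts[strings.TrimPrefix(fields[4], "ns/")]++
		}
	}
	for ns, n := range counts {
		fmt.Printf("%3d %s\n", n, ns)
	}
}

Fed the event list above, this would report openshift-monitoring and openshift-machine-config-operator as the namespaces with the most error-level events.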

stdout/stderr from junit_e2e_20191108-073414.xml



57 Passed Tests

171 Skipped Tests