Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-08 00:01
Elapsed: 1h39m
Work namespace: ci-op-fystkhds
Pod: 4.3.0-0.nightly-2019-11-07-235426-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute (54m53s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
23 error level events were detected during this test run:

Nov 08 00:42:29.813 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 00:42:38.376 E ns/openshift-monitoring pod/grafana-85bf74556f-xwbt6 node/ci-op--bqfmn-w-d-ql8g7.c.openshift-gce-devel-ci.internal container=grafana-proxy container exited with code 2 (Error): 
Nov 08 00:42:38.390 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--bqfmn-w-d-ql8g7.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/08 00:28:37 Watching directory: "/etc/alertmanager/config"\n
Nov 08 00:42:38.390 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--bqfmn-w-d-ql8g7.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 00:28:37 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:28:37 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 00:28:37 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 00:28:37 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 00:28:37 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 00:28:37 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:28:37 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 00:28:37 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 00:42:58.334 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--bqfmn-w-b-l7d6t.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T00:42:53.285Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T00:42:53.289Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T00:42:53.289Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T00:42:53.290Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T00:42:53.290Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T00:42:53.290Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T00:42:53.293Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T00:42:53.293Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 00:50:41.528 E ns/openshift-machine-config-operator pod/machine-config-daemon-kqwl9 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 00:50:42.633 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/08 00:29:22 Watching directory: "/etc/alertmanager/config"\n
Nov 08 00:50:42.633 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 00:29:23 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:29:23 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 00:29:23 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 00:29:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 00:29:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 00:29:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:29:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 00:29:23 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 00:50:42.657 E ns/openshift-monitoring pod/thanos-querier-5ddfc789fd-ldxk4 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2019/11/08 00:42:45 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 00:42:45 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 00:42:45 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 00:42:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 00:42:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 00:42:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2019/11/08 00:42:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 00:42:45 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 00:42:45 http.go:96: HTTPS: listening on [::]:9091\n
Nov 08 00:50:42.687 E ns/openshift-ingress pod/router-default-56666d9db-k7b69 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:42:35.170Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:42:40.166Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:42:45.170Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:42:50.162Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:42:55.170Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:43:00.162Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:43:05.168Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:43:13.771Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:43:18.769Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:43:23.771Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-08T00:44:52.640Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 08 00:51:00.698 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--bqfmn-w-c-vrsgm.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T00:50:58.590Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T00:50:58.594Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T00:50:58.594Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T00:50:58.595Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T00:50:58.595Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T00:50:58.595Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T00:50:58.597Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T00:50:58.597Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08
Nov 08 00:53:39.265 E ns/openshift-monitoring pod/thanos-querier-5ddfc789fd-cvpnb node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.265 E ns/openshift-monitoring pod/thanos-querier-5ddfc789fd-cvpnb node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.265 E ns/openshift-monitoring pod/thanos-querier-5ddfc789fd-cvpnb node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.265 E ns/openshift-monitoring pod/thanos-querier-5ddfc789fd-cvpnb node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.323 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.323 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:53:39.323 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:05:48.404 E ns/openshift-machine-config-operator pod/machine-config-daemon-2vmfh node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 01:09:50.731 E ns/openshift-sdn pod/sdn-vz9b4 node/ci-op--bqfmn-m-2.c.openshift-gce-devel-ci.internal container=sdn container exited with code 255 (Error): se\nI1108 01:09:42.741181    5574 vnids.go:162] Dissociate netid 6677712 from namespace "e2e-pv-6055"\nI1108 01:09:43.435269    5574 vnids.go:148] Associate netid 3999694 to namespace "e2e-taint-single-pod-1557" with mcEnabled false\nI1108 01:09:47.932752    5574 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.128.0.48:6443 10.129.0.39:6443 10.130.0.44:6443]\nI1108 01:09:47.932894    5574 roundrobin.go:218] Delete endpoint 10.128.0.48:6443 for service "openshift-authentication/oauth-openshift:https"\nI1108 01:09:47.932994    5574 proxy.go:334] hybrid proxy: syncProxyRules start\nI1108 01:09:47.995758    5574 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-authentication/oauth-openshift:https to [10.128.0.48:6443 10.130.0.44:6443]\nI1108 01:09:47.995924    5574 roundrobin.go:218] Delete endpoint 10.129.0.39:6443 for service "openshift-authentication/oauth-openshift:https"\nI1108 01:09:48.191204    5574 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI1108 01:09:48.302814    5574 proxier.go:371] userspace proxy: processing 0 service events\nI1108 01:09:48.302915    5574 proxier.go:350] userspace syncProxyRules took 111.571773ms\nI1108 01:09:48.302935    5574 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI1108 01:09:48.302955    5574 proxy.go:334] hybrid proxy: syncProxyRules start\nI1108 01:09:48.581459    5574 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI1108 01:09:48.679497    5574 proxier.go:371] userspace proxy: processing 0 service events\nI1108 01:09:48.679528    5574 proxier.go:350] userspace syncProxyRules took 98.037241ms\nI1108 01:09:48.679539    5574 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI1108 01:09:49.810096    5574 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Broken pipe)\nF1108 01:09:49.810246    5574 healthcheck.go:99] SDN healthcheck detected unhealthy OVS server, restarting: plugin is not setup\n
Nov 08 01:10:45.074 E ns/openshift-machine-config-operator pod/machine-config-daemon-k7kgc node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 01:17:48.220 E ns/openshift-machine-config-operator pod/machine-config-daemon-9bhn6 node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 01:20:12.598 E ns/openshift-machine-config-operator pod/machine-config-daemon-gndjg node/ci-op--bqfmn-w-d-zzcqg.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 

stdout/stderr: junit_e2e_20191108-013005.xml
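
Not part of the CI output above: a minimal triage sketch for the events listed in this failure, assuming `oc` access to the cluster under test (in practice the CI cluster is torn down after the run, so the same objects would be read from the job's must-gather artifacts instead).

# Hypothetical follow-up commands; shown for illustration only, not produced by this job.
# Inspect the dns clusteroperator that reported Degraded=True (NotAllDNSesAvailable) at 00:42:29.
oc get clusteroperator dns -o yaml

# Confirm whether the DNS DaemonSet had unavailable pods and on which nodes.
oc -n openshift-dns get daemonset
oc -n openshift-dns get pods -o wide

# Pull logs from the previous instance of one of the restarted monitoring containers.
oc -n openshift-monitoring logs alertmanager-main-2 -c alertmanager-proxy --previous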

Passed tests: 57
Skipped tests: 8