Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-08 04:46
Elapsed: 1h42m
Work namespace: ci-op-lckw6q1i
Pod: 4.3.0-0.ci-2019-11-08-043905-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute (55m26s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
23 error level events were detected during this test run:

Nov 08 05:23:31.695 E ns/openshift-marketplace pod/community-operators-6cc5bd8895-f7f9d node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 08 05:23:31.732 E ns/openshift-machine-config-operator pod/machine-config-daemon-ddrv7 node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:23:32.828 E ns/openshift-monitoring pod/openshift-state-metrics-7f6b5cdb9f-fctpg node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 05:23:32.828 E ns/openshift-monitoring pod/openshift-state-metrics-7f6b5cdb9f-fctpg node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 05:23:32.843 E ns/openshift-monitoring pod/prometheus-adapter-6bc69b895d-vp5md node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1108 05:11:05.731143       1 adapter.go:93] successfully using in-cluster auth\nI1108 05:11:06.578701       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 05:28:01.664 E ns/openshift-machine-config-operator pod/machine-config-daemon-frqmc node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:30:12.051 E ns/openshift-machine-config-operator pod/machine-config-daemon-v2jnt node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:30:12.066 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/08 05:28:15 Watching directory: "/etc/alertmanager/config"\n
Nov 08 05:30:12.066 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 05:28:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 05:28:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 05:28:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 05:28:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 05:28:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 05:28:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 05:28:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 05:28:16 http.go:96: HTTPS: listening on [::]:9095\n2019/11/08 05:28:17 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/08 05:28:20 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Nov 08 05:38:46.993 E ns/openshift-machine-config-operator pod/machine-config-daemon-d72hd node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:49:03.278 E ns/openshift-machine-config-operator pod/machine-config-daemon-mj7fd node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:50:39.536 E ns/openshift-machine-config-operator pod/machine-config-daemon-24ngz node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 05:52:13.676 E ns/openshift-machine-config-operator pod/machine-config-daemon-ncfwq node/ci-op--gfn6t-w-b-v7lzh.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 08 06:13:38.410 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 06:13:48.044 E ns/openshift-marketplace pod/certified-operators-6764874d9b-xmr94 node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 08 06:13:49.159 E ns/openshift-monitoring pod/telemeter-client-7c57dff75d-xg5kb node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Nov 08 06:13:49.176 E ns/openshift-monitoring pod/telemeter-client-7c57dff75d-xg5kb node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:13:49.176 E ns/openshift-monitoring pod/telemeter-client-7c57dff75d-xg5kb node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=reload container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:13:49.292 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 2 (Error):  05:12:01 oauthproxy.go:774: basicauth: 10.129.2.5:52344 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:16:32 oauthproxy.go:774: basicauth: 10.129.2.5:54612 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:21:02 oauthproxy.go:774: basicauth: 10.129.2.5:56226 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:25:32 oauthproxy.go:774: basicauth: 10.129.2.5:58678 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:30:02 oauthproxy.go:774: basicauth: 10.129.2.5:34156 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:34:32 oauthproxy.go:774: basicauth: 10.129.2.5:37878 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:39:03 oauthproxy.go:774: basicauth: 10.129.2.5:41554 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:43:33 oauthproxy.go:774: basicauth: 10.129.2.5:45332 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:48:03 oauthproxy.go:774: basicauth: 10.129.2.5:49084 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:52:33 oauthproxy.go:774: basicauth: 10.129.2.5:52794 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 05:57:03 oauthproxy.go:774: basicauth: 10.129.2.5:56484 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 06:01:33 oauthproxy.go:774: basicauth: 10.129.2.5:60148 Authorization header does not start with 'Basic', skipping basic authentication\n2
Nov 08 06:13:49.292 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/08 05:11:16 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/08 05:21:45 config map updated\n2019/11/08 05:21:46 successfully triggered reload\n2019/11/08 05:22:59 config map updated\n2019/11/08 05:22:59 successfully triggered reload\n
Nov 08 06:13:49.292 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--gfn6t-w-c-9pt88.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-08T05:11:15.672692494Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2019-11-08T05:11:15.672836017Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-08T05:11:15.674426595Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-08T05:11:20.810330317Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-08T05:12:41.121128628Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 08 06:14:09.000 E ns/openshift-ingress pod/router-default-7cd5bf64c5-4wq5d node/ci-op--gfn6t-w-d-dtctc.c.openshift-gce-devel-ci.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 06:14:15.019 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--gfn6t-w-b-xltw7.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-08T06:14:05.783Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-08T06:14:05.786Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-08T06:14:05.787Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-08T06:14:05.787Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-08T06:14:05.787Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-08T06:14:05.788Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-08T06:14:05.788Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-08T06:14:05.788Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-08T06:14:05.790Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-08T06:14:05.790Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-08

				
From junit_e2e_20191108-061745.xml

57 Passed Tests

8 Skipped Tests