Result: FAILURE
Tests: 3 failed / 54 succeeded
Started: 2020-08-05 02:04
Elapsed: 1h32m
Work namespace: ci-op-vtygki0g
pod: 4.3.0-0.nightly-2020-08-05-015829-gcp-serial
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 40m15s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
14 error level events were detected during this test run:

Aug 05 02:48:48.116 E ns/openshift-ingress pod/router-default-7c887bf465-85xnn node/ci-op-kgcnx-w-b-hstqw.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:34:46.650759       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:34:52.667766       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:34:57.662019       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:35:05.312366       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:35:10.314489       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:35:37.300855       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:35:42.283332       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:37:27.036845       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:37:32.028918       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:37:47.570836       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0805 02:48:46.788424       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 05 02:48:48.135 E ns/openshift-monitoring pod/thanos-querier-8656db7474-xpvrx node/ci-op-kgcnx-w-b-hstqw.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/05 02:33:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/05 02:33:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/05 02:33:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/05 02:33:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/05 02:33:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/05 02:33:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/05 02:33:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/05 02:33:46 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/05 02:33:46 http.go:106: HTTPS: listening on [::]:9091\n
Aug 05 02:49:56.195 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kgcnx-w-b-hstqw.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/08/05 02:48:56 Watching directory: "/etc/alertmanager/config"\n
Aug 05 02:49:56.195 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kgcnx-w-b-hstqw.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/05 02:48:56 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/05 02:48:56 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/05 02:48:56 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/05 02:48:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/05 02:48:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/05 02:48:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/05 02:48:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/05 02:48:56 http.go:106: HTTPS: listening on [::]:9095\n
Aug 05 03:08:17.521 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 05 03:08:36.143 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 05 03:08:43.944 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kgcnx-w-d-fxcv9.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-05T02:34:14.231Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-05T02:34:14.235Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-05T02:34:14.236Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-05T02:34:14.237Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-05T02:34:14.237Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-05T02:34:14.237Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-05T02:34:14.238Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-05T02:34:14.238Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-05T02:34:14.238Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-05T02:34:14.241Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-05T02:34:14.241Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-05
Aug 05 03:08:43.944 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kgcnx-w-d-fxcv9.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/05 02:34:14 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 05 03:08:43.944 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kgcnx-w-d-fxcv9.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/05 02:34:14 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/05 02:34:14 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/05 02:34:14 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/05 02:34:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/05 02:34:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/05 02:34:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/05 02:34:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/05 02:34:15 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/05 02:34:15 http.go:106: HTTPS: listening on [::]:9091\n
Aug 05 03:08:43.944 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kgcnx-w-d-fxcv9.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-05T02:34:14.382022276Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2020-08-05T02:34:14.382182452Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-05T02:34:14.384018321Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-05T02:34:19.603332715Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 05 03:09:03.586 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-kgcnx-w-b-hstqw.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-05T03:09:00.022Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-05T03:09:00.030Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-05T03:09:00.032Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-05T03:09:00.033Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-05T03:09:00.034Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-05T03:09:00.034Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-05T03:09:00.034Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-05T03:09:00.036Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-05T03:09:00.036Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-05
Aug 05 03:10:26.145 E ns/openshift-monitoring pod/prometheus-adapter-6df8b7bd8b-sn9j7 node/ci-op-kgcnx-w-d-w9kb8.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0805 03:08:45.865770       1 adapter.go:93] successfully using in-cluster auth\nI0805 03:08:46.276396       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 05 03:16:20.961 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kgcnx-w-d-w9kb8.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/08/05 03:10:32 Watching directory: "/etc/alertmanager/config"\n
Aug 05 03:16:20.961 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-kgcnx-w-d-w9kb8.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/05 03:10:32 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/05 03:10:32 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/05 03:10:32 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/05 03:10:32 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/05 03:10:32 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/05 03:10:32 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/05 03:10:32 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/05 03:10:32 http.go:106: HTTPS: listening on [::]:9095\n
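The "Monitor cluster while tests execute" check fails whenever error-level events such as the ones above are recorded during the run; here the container exits in openshift-ingress and openshift-monitoring, plus the dns operator briefly reporting Degraded, are what tripped it. As a rough, hypothetical illustration only (not the monitor's actual implementation), a client-go program along the following lines can surface the non-Normal events a cluster recorded; the kubeconfig loading and the field selector are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical sketch: list non-Normal (warning/error) events cluster-wide.
	// This only approximates the idea behind the monitor; it is not its source.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only events whose type is not "Normal" are of interest here.
	events, err := client.CoreV1().Events(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "type!=Normal"})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s %s: %s\n",
			e.LastTimestamp.Format("Jan 02 15:04:05"),
			e.Namespace, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}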

				

openshift-tests [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining] [Suite:openshift/conformance/serial] [Suite:k8s] 59s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-api\-machinery\]\sNamespaces\s\[Serial\]\sshould\salways\sdelete\sfast\s\(ALL\sof\s100\snamespaces\sin\s150\sseconds\)\s\[Feature\:ComprehensiveNamespaceDraining\]\s\[Suite\:openshift\/conformance\/serial\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:50]: failed to create namespace: nslifetest-28
Unexpected error:
    <*errors.errorString | 0xc0002ec710>: {
        s: "watch closed before UntilWithoutRetry timeout",
    }
    watch closed before UntilWithoutRetry timeout
occurred
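For context on the failure above: this conformance test creates 100 nslifetest-N namespaces, deletes them, and requires all of them to be gone within 150 seconds. This run failed earlier than that, while still creating nslifetest-28, because the watch used to wait for the new namespace was closed before the UntilWithoutRetry timeout fired, so the 150-second deletion budget was never even exercised. The following is a minimal sketch of the same create-then-delete-fast pattern, assuming a plain client-go client; it is an approximation for local reproduction, not the test's source, and the kubeconfig path, poll interval, and error handling are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical reproduction sketch, not the conformance test's source.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	const total = 100
	// Create the namespaces; this is the phase that failed in the run above
	// ("failed to create namespace: nslifetest-28").
	for i := 0; i < total; i++ {
		ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("nslifetest-%d", i)}}
		if _, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
			panic(fmt.Errorf("failed to create namespace: %w", err))
		}
	}

	// Delete them all, then require every one to disappear within 150 seconds.
	for i := 0; i < total; i++ {
		_ = client.CoreV1().Namespaces().Delete(ctx, fmt.Sprintf("nslifetest-%d", i), metav1.DeleteOptions{})
	}
	err = wait.PollImmediate(2*time.Second, 150*time.Second, func() (bool, error) {
		for i := 0; i < total; i++ {
			_, err := client.CoreV1().Namespaces().Get(ctx, fmt.Sprintf("nslifetest-%d", i), metav1.GetOptions{})
			if err == nil {
				return false, nil // namespace still present
			}
			if !apierrors.IsNotFound(err) {
				return false, err
			}
		}
		return true, nil
	})
	if err != nil {
		panic(fmt.Errorf("namespaces not deleted within 150s: %w", err))
	}
	fmt.Println("all namespaces deleted in time")
}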
				


operator Run template e2e-gcp-serial - e2e-gcp-serial container test 40m35s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-gcp\-serial\s\-\se2e\-gcp\-serial\scontainer\stest$'
on/oauth-openshift-794ddc9bd-4x4vr to ci-op-kgcnx-m-1.c.openshift-gce-devel-ci.internal
Aug 05 03:22:45.524 I ns/openshift-authentication pod/oauth-openshift-794ddc9bd-4x4vr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74c5014f2782c0c2571898ffbf08071809537be3895988692577806667ac6bc4" already present on machine
Aug 05 03:22:45.724 I ns/openshift-authentication pod/oauth-openshift-794ddc9bd-4x4vr Created container oauth-openshift
Aug 05 03:22:45.760 I ns/openshift-authentication pod/oauth-openshift-794ddc9bd-4x4vr Started container oauth-openshift
Aug 05 03:22:50.315 I ns/openshift-authentication deployment/oauth-openshift Scaled down replica set oauth-openshift-6c8955bcbc to 0
Aug 05 03:22:50.326 W ns/openshift-authentication pod/oauth-openshift-6c8955bcbc-fxt5f node/ci-op-kgcnx-m-0.c.openshift-gce-devel-ci.internal graceful deletion within 40s
Aug 05 03:22:50.341 I ns/openshift-authentication replicaset/oauth-openshift-6c8955bcbc Deleted pod: oauth-openshift-6c8955bcbc-fxt5f
Aug 05 03:22:50.355 I ns/openshift-authentication pod/oauth-openshift-6c8955bcbc-fxt5f Stopping container oauth-openshift
Aug 05 03:22:53.031 W clusteroperator/authentication changed Progressing to False
Aug 05 03:22:53.042 I ns/openshift-authentication-operator deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from True to False ("") (2 times)
Aug 05 03:23:15.121 W ns/openshift-authentication pod/oauth-openshift-6c8955bcbc-2kqhx node/ci-op-kgcnx-m-1.c.openshift-gce-devel-ci.internal deleted
Aug 05 03:23:17.464 W ns/openshift-authentication pod/oauth-openshift-6c8955bcbc-fxt5f node/ci-op-kgcnx-m-0.c.openshift-gce-devel-ci.internal deleted

Failing tests:

[sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining] [Suite:openshift/conformance/serial] [Suite:k8s]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200805-032631.xml

error: 1 fail, 49 pass, 8 skip (40m15s)


