Result: FAILURE
Tests: 3 failed / 913 succeeded
Started: 2020-03-25 15:36
Elapsed: 1h3m
Work namespace: ci-op-i6hv0szt
Refs: master:24d4772d, 458:2cfb0348
Pod: 4a520736-6eae-11ea-aeec-0a58ac103b3a
Repo: openshift/cluster-image-registry-operator
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 21m7s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
7 error level events were detected during this test run:

Mar 25 16:09:12.146 E ns/openshift-monitoring pod/prometheus-k8s-0 node/compute-2 container=rules-configmap-reloader container exited with code 2 (Error): 2020/03/25 16:08:05 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Mar 25 16:09:12.146 E ns/openshift-monitoring pod/prometheus-k8s-0 node/compute-2 container=prometheus-proxy container exited with code 2 (Error): 2020/03/25 16:08:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/03/25 16:08:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/25 16:08:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/25 16:08:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/25 16:08:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/25 16:08:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/03/25 16:08:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/25 16:08:05 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0325 16:08:05.640348       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/25 16:08:05 http.go:107: HTTPS: listening on [::]:9091\n2020/03/25 16:08:46 oauthproxy.go:774: basicauth: 10.128.2.6:47580 Authorization header does not start with 'Basic', skipping basic authentication\n
Mar 25 16:09:12.146 E ns/openshift-monitoring pod/prometheus-k8s-0 node/compute-2 container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-03-25T16:08:05.137836921Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-03-25T16:08:05.14000824Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-03-25T16:08:10.241940054Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-03-25T16:08:10.241997836Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Mar 25 16:09:28.183 E ns/openshift-monitoring pod/prometheus-k8s-0 node/compute-2 container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-25T16:09:26.448Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-25T16:09:26.450Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-25T16:09:26.452Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-25T16:09:26.452Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-25T16:09:26.452Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-25T16:09:26.452Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-25T16:09:26.453Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-25T16:09:26.453Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-25T16:09:26.453Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-25T16:09:26.453Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-25
Mar 25 16:13:56.001 E ns/openshift-marketplace pod/opsrctestlabel-859f84668-xgzvg node/compute-0 container=opsrctestlabel container exited with code 2 (Error): 
Mar 25 16:13:57.031 E ns/openshift-marketplace pod/csctestlabel-64bb968cb4-m45vp node/compute-0 container=csctestlabel container exited with code 2 (Error): 
Mar 25 16:17:42.931 E ns/openshift-marketplace pod/samename-7c8dd66ccd-dhr2p node/compute-1 container=samename container exited with code 2 (Error): 

				
from junit_e2e_20200325-163240.xml



openshift-tests [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel] 2m4s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sManaged\scluster\sshould\shave\sno\scrashlooping\spods\sin\score\snamespaces\sover\stwo\sminutes\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/operators/cluster.go:113]: Expected
    <[]string | len:1, cap:1>: [
        "Pod openshift-etcd/etcd-control-plane-2 is not healthy: container etcd has restarted more than 5 times",
    ]
to be empty
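The failed conformance check flags any pod in a core namespace whose container has restarted more than five times over the observation window. The following is a minimal sketch of that logic in Python (the actual check lives in Go at test/extended/operators/cluster.go); the pod data, the `crashlooping` helper name, and the restart threshold default are illustrative assumptions, not the origin test's code.

```python
import json

# Hypothetical sample of `kubectl get pods -n openshift-etcd -o json` output,
# trimmed to just the fields the check needs.
pods_json = json.dumps({
    "items": [
        {
            "metadata": {"namespace": "openshift-etcd",
                         "name": "etcd-control-plane-2"},
            "status": {"containerStatuses": [
                {"name": "etcd", "restartCount": 7},
            ]},
        },
        {
            "metadata": {"namespace": "openshift-etcd",
                         "name": "etcd-control-plane-0"},
            "status": {"containerStatuses": [
                {"name": "etcd", "restartCount": 1},
            ]},
        },
    ]
})

def crashlooping(pods: dict, max_restarts: int = 5) -> list:
    """Return a failure string for each container restarted more than max_restarts times."""
    failures = []
    for pod in pods["items"]:
        ns = pod["metadata"]["namespace"]
        name = pod["metadata"]["name"]
        for cs in pod["status"].get("containerStatuses", []):
            if cs["restartCount"] > max_restarts:
                failures.append(
                    f"Pod {ns}/{name} is not healthy: container "
                    f"{cs['name']} has restarted more than {max_restarts} times"
                )
    return failures

# The test passes only when this list is empty; here etcd-control-plane-2 trips it.
print(crashlooping(json.loads(pods_json)))
```

With the sample data above, the output reproduces the exact failure string reported by the test.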
				
from junit_e2e_20200325-163240.xml



operator Run template e2e-vsphere - e2e-vsphere container test 23m40s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-vsphere\s\-\se2e\-vsphere\scontainer\stest$'
Mar 25 16:24:41.156 - 164s  W ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 node/control-plane-1 pod has been pending longer than a minute
Mar 25 16:25:08.565 I ns/openshift-etcd-operator deployment/etcd-operator Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator:\ncause by changes in data.ca-bundle.crt (9 times)
Mar 25 16:25:08.628 I ns/openshift-etcd-operator deployment/etcd-operator Updated Secret/etcd-client -n openshift-etcd-operator because it changed (9 times)
Mar 25 16:25:13.489 I ns/openshift-etcd-operator deployment/etcd-operator unhealthy members: control-plane-0,control-plane-2,control-plane-1 (18 times)
Mar 25 16:25:37.493 I ns/openshift-etcd-operator deployment/etcd-operator unhealthy members: control-plane-1 (21 times)
Mar 25 16:27:32.899 I ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 Container image "registry.svc.ci.openshift.org/ci-op-i6hv0szt/stable@sha256:ceb7dcc18f78cc3013d0dff3211ac9e422cbd08a4220e0a660db0fd319a746f7" already present on machine
Mar 25 16:27:33.191 I ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 Created container copy
Mar 25 16:27:33.216 I ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 Started container copy
Mar 25 16:28:24.886 I ns/openshift-etcd-operator deployment/etcd-operator Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator:\ncause by changes in data.ca-bundle.crt (10 times)
Mar 25 16:28:24.934 I ns/openshift-etcd-operator deployment/etcd-operator Updated Secret/etcd-client -n openshift-etcd-operator because it changed (10 times)
Mar 25 16:28:44.141 W ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 node/control-plane-1 graceful deletion within 0s
Mar 25 16:28:44.144 W ns/openshift-must-gather-jkhsq pod/must-gather-cjm79 node/control-plane-1 deleted

Failing tests:

[sig-arch] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200325-163240.xml

error: 1 fail, 903 pass, 1386 skip (21m7s)

				from junit_operator.xml


