Result: FAILURE
Tests: 4 failed / 960 succeeded
Started: 2020-02-26 03:24
Elapsed: 1h34m
Work namespace: ci-op-f27b56zp
Refs: release-4.4:d6c183d3, 332:0d92dcb5
Pod: 7646c4b1-5847-11ea-8659-0a58ac102e8a
Repo: openshift/cluster-version-operator
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (35m44s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
2 error level events were detected during this test run:

Feb 26 04:15:48.675 E ns/default pod/recycler-for-nfs-jsvhc node/ip-10-0-86-249.ec2.internal pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Feb 26 04:41:21.260 E ns/openshift-marketplace pod/csctestlabel-6597ff9b49-2h2tl node/ip-10-0-59-106.ec2.internal container=csctestlabel container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated

				
from junit_e2e_20200226-044707.xml
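
As a follow-up sketch (not part of the job output), the two pod failures above can be investigated by listing the Kubernetes events that reference each pod. This is a minimal client-go example, assuming access to the test cluster; the kubeconfig path is a placeholder, while the namespaces and pod names are taken from the events above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point it at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Events referencing each failed pod explain the DeadlineExceeded and
	// exit-code-137 terminations reported above.
	for ns, pod := range map[string]string{
		"default":               "recycler-for-nfs-jsvhc",
		"openshift-marketplace": "csctestlabel-6597ff9b49-2h2tl",
	} {
		events, err := client.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "involvedObject.name=" + pod,
		})
		if err != nil {
			panic(err)
		}
		for _, e := range events.Items {
			fmt.Printf("%s %s/%s %s: %s\n", e.LastTimestamp, ns, pod, e.Reason, e.Message)
		}
	}
}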


openshift-tests [Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel] (1m27s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Prometheus\]\[Late\]\sAlerts\s\[Top\sLevel\]\s\[Feature\:Prometheus\]\[Late\]\sAlerts\sshouldn\'t\sreport\sany\salerts\sin\sfiring\sstate\sapart\sfrom\sWatchdog\sand\sAlertmanagerReceiversNotConfigured\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:163]: Expected
    <map[string]error | len:1>: {
        "count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|FailingOperator\",alertstate=\"firing\"}[2h]) >= 1": {
            s: "promQL query: count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|FailingOperator\",alertstate=\"firing\"}[2h]) >= 1 had reported incorrect results:\n[{\"metric\":{\"alertname\":\"TargetDown\",\"alertstate\":\"firing\",\"job\":\"metrics\",\"namespace\":\"openshift-console-operator\",\"service\":\"metrics\",\"severity\":\"warning\"},\"value\":[1582692327.917,\"57\"]}]",
        },
    }
to be empty
				
from junit_e2e_20200226-044707.xml
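
The failing check is a single PromQL query that the test expects to return an empty vector; here it returned one series, a firing TargetDown alert for the metrics service in openshift-console-operator. Below is a minimal sketch of re-running the same query by hand against the cluster's Prometheus HTTP API, assuming a reachable route and a token with read access; the route URL and token are placeholders, and TLS verification is skipped only because this is ad-hoc debugging.

package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// Same expression the test evaluates: any alert other than the allowed
	// ones that was firing at any point in the last two hours.
	query := `count_over_time(ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|FailingOperator",alertstate="firing"}[2h]) >= 1`

	// Placeholder route and token; in a real cluster these come from the
	// openshift-monitoring Prometheus route and a service account token.
	endpoint := "https://prometheus-k8s-openshift-monitoring.apps.example.com/api/v1/query?query=" + url.QueryEscape(query)
	req, _ := http.NewRequest("GET", endpoint, nil)
	req.Header.Set("Authorization", "Bearer <token>")

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The test passes only if the result vector is empty; any series here
	// (e.g. TargetDown as above) fails it.
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}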


openshift-tests [sig-api-machinery] ResourceQuota [Top Level] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] (48s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-api\-machinery\]\sResourceQuota\s\[Top\sLevel\]\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\ssecret\.\s\[Conformance\]\s\[Suite\:openshift\/conformance\/parallel\/minimal\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:166]: Unexpected error:
    <*errors.errorString | 0xc0002c23f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
				
from junit_e2e_20200226-044707.xml
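
"timed out waiting for the condition" is the generic message from the wait helpers in k8s.io/apimachinery: the test creates a ResourceQuota and a secret, then polls the quota status until the secret shows up in its reported usage, failing if the deadline passes first. A minimal sketch of that polling pattern follows, assuming cluster access; the kubeconfig path, namespace, quota name, and timings are illustrative, not taken from the test.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point it at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the quota controller has counted the secret, or give up.
	err = wait.Poll(2*time.Second, 30*time.Second, func() (bool, error) {
		rq, err := client.CoreV1().ResourceQuotas("e2e-resourcequota").Get(context.TODO(), "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		used, ok := rq.Status.Used["secrets"]
		return ok && used.Value() >= 1, nil
	})
	if err != nil {
		// When the quota controller is slow, this is wait.ErrWaitTimeout:
		// "timed out waiting for the condition".
		fmt.Println(err)
	}
}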


operator Run template e2e-aws-upi - e2e-aws-upi container test (38m56s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-aws\-upi\s\-\se2e\-aws\-upi\scontainer\stest$'
4-4f29-8c14-02f0ca2548c3 error deleting EBS volume "vol-0cb4e50be69f62085" since volume is currently attached to "i-07f2f856dee2c425b"
Feb 26 04:44:08.312 I ns/openshift-etcd-operator deployment/etcd-operator Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator:\ncause by changes in data.ca-bundle.crt (16 times)
Feb 26 04:44:08.367 I ns/openshift-etcd-operator deployment/etcd-operator Updated Secret/etcd-client -n openshift-etcd-operator because it changed (16 times)
Feb 26 04:44:13.414 - 86s   I test="[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]" running
Feb 26 04:44:50.568 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator Updated storage urls to https://10.0.59.226:2379,https://10.0.7.198:2379,https://10.0.78.54:2379,https://10.0.88.85:2379 (86 times)
Feb 26 04:44:50.577 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator Updated storage urls to https://10.0.59.226:2379,https://10.0.7.198:2379,https://10.0.78.54:2379,https://10.0.88.85:2379 (87 times)
Feb 26 04:45:40.307 I test="[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]" failed

Flaky tests:

[sig-api-machinery] ResourceQuota [Top Level] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]

Failing tests:

[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200226-044707.xml

error: 2 fail, 950 pass, 1337 skip (35m44s)

from junit_operator.xml

