Result: FAILURE
Tests: 4 failed / 978 succeeded
Started: 2020-02-23 16:12
Elapsed: 1h36m
Work namespace: ci-op-dr4n1n47
Refs: release-4.4:98773b31, 3158:b2a45ad0
Pod: 36425ac5-5657-11ea-867d-0a58ac10434c
Repo: openshift/installer
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 27m40s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
2 error level events were detected during this test run:

Feb 23 17:13:17.342 E ns/openshift-marketplace pod/samename-586ff7647d-xl2pz node/ci-op-vmbl7-w-0.c.openshift-gce-devel-ci.internal container=samename container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 23 17:16:59.925 E ns/openshift-controller-manager pod/controller-manager-6zrr5 node/ci-op-vmbl7-m-1.c.openshift-gce-devel-ci.internal container=controller-manager container exited with code 2 (Error):

r-manager/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:71 +0xbe

goroutine 33846 [sync.Cond.Wait]:
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
sync.runtime_notifyListWait(0xc0072b96c0, 0xc000000005)
	/usr/local/go/src/runtime/sema.go:510 +0xf8
sync.(*Cond).Wait(0xc0072b96b0)
	/usr/local/go/src/sync/cond.go:56 +0x9d
golang.org/x/net/http2.(*pipe).Read(0xc0072b96a8, 0xc005ee535c, 0x4, 0x4, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/golang.org/x/net/http2/pipe.go:64 +0xa6
golang.org/x/net/http2.transportResponseBody.Read(0xc0072b9680, 0xc005ee535c, 0x4, 0x4, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/golang.org/x/net/http2/transport.go:1959 +0xac
io.ReadAtLeast(0x7f3d9d9d6608, 0xc0072b9680, 0xc005ee535c, 0x4, 0x4, 0x4, 0x0, 0x203002, 0x203002)
	/usr/local/go/src/io/io.go:310 +0x87
k8s.io/apimachinery/pkg/util/framer.(*lengthDelimitedFrameReader).Read(0xc005f560a0, 0xc0093e8000, 0x800, 0xa80, 0x253c360, 0x0, 0x38)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/k8s.io/apimachinery/pkg/util/framer/framer.go:76 +0x275
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc007a9ec80, 0x0, 0x2544d20, 0xc008044980, 0x0, 0x0, 0x0, 0xc007f22838, 0x45d4d0)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/k8s.io/apimachinery/pkg/runtime/serializer/streaming/streaming.go:77 +0x89
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc005f560c0, 0xc0006a4fa8, 0x5, 0x253c360, 0xc003d65800, 0x0, 0x0)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/k8s.io/client-go/rest/watch/decoder.go:49 +0x7c
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc008261200)
	/go/src/github.com/openshift/openshift-controller-manager/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:104 +0x175
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/go/src/github.com/openshift/openshift-controller-manager/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:71 +0xbe

(full stdout/stderr in junit_e2e_20200223-173639.xml)
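Both error events describe containers that died without leaving a usable status, and the stack trace in the second is just watch goroutines being torn down as the process exited. As a triage sketch (not part of the job output; the pod names come from the events above and are unique to this run, so substitute your own):

oc logs -n openshift-controller-manager pod/controller-manager-6zrr5 --previous
oc describe pod -n openshift-marketplace samename-586ff7647d-xl2pz
oc get events -n openshift-marketplace --sort-by=.lastTimestamp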



openshift-tests [Feature:Platform] Managed cluster should [Top Level] [Feature:Platform] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel] 2m4s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Platform\]\sManaged\scluster\sshould\s\[Top\sLevel\]\s\[Feature\:Platform\]\sManaged\scluster\sshould\shave\sno\scrashlooping\spods\sin\score\snamespaces\sover\stwo\sminutes\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/operators/cluster.go:110]: Expected
    <[]string | len:1, cap:1>: [
        "Pod openshift-controller-manager/controller-manager-jx8m9 is not healthy: I0223 16:53:08.262956       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0223 16:53:08.265574       1 controller_manager.go:50] DeploymentConfig controller using images from \"registry.svc.ci.openshift.org/ci-op-dr4n1n47/stable@sha256:16d809ed1af890c0e6fc37430c9b160be7b6ca3385ce5e13757e6b083b38153a\"\nI0223 16:53:08.265607       1 controller_manager.go:56] Build controller using images from \"registry.svc.ci.openshift.org/ci-op-dr4n1n47/stable@sha256:7879941c90e892ad970d271baf72df785701712dec53b02252a2d2d61275d807\"\nI0223 16:53:08.265725       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0223 16:53:08.266461       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n",
    ]
to be empty
(full stdout/stderr in junit_e2e_20200223-173639.xml)
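The check at test/extended/operators/cluster.go:110 expects its list of unhealthy core-namespace pods to be empty; here controller-manager-jx8m9 restarted while the test watched for two minutes. A rough way to spot the same condition by hand (the jsonpath below is my own sketch, not the test's code):

oc get pods -n openshift-controller-manager -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].restartCount}{"\n"}{end}'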



openshift-tests [Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel] 1m20s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Prometheus\]\[Late\]\sAlerts\s\[Top\sLevel\]\s\[Feature\:Prometheus\]\[Late\]\sAlerts\sshouldn\'t\sreport\sany\salerts\sin\sfiring\sstate\sapart\sfrom\sWatchdog\sand\sAlertmanagerReceiversNotConfigured\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/prometheus/prometheus_builds.go:163]: Expected
    <map[string]error | len:1>: {
        "count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|etcdInsufficientMembers|FailingOperator\",alertstate=\"firing\"}[2h]) >= 1": {
            s: "promQL query: count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|etcdInsufficientMembers|FailingOperator\",alertstate=\"firing\"}[2h]) >= 1 had reported incorrect results:\n[{\"metric\":{\"alertname\":\"KubeDaemonSetRolloutStuck\",\"alertstate\":\"firing\",\"daemonset\":\"controller-manager\",\"endpoint\":\"https-main\",\"instance\":\"10.129.2.4:8443\",\"job\":\"kube-state-metrics\",\"namespace\":\"openshift-controller-manager\",\"pod\":\"kube-state-metrics-6cf9d54c98-fztwq\",\"service\":\"kube-state-metrics\",\"severity\":\"critical\"},\"value\":[1582479263.662,\"30\"]}]",
        },
    }
to be empty
(full stdout/stderr in junit_e2e_20200223-173639.xml)
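The expression above is the test's own allowlist query: any alert outside Watchdog, AlertmanagerReceiversNotConfigured, and a few tolerated flakes that fired within the last two hours fails the run; here KubeDaemonSetRolloutStuck was firing against the openshift-controller-manager daemonset. To replay the query by hand (a sketch assuming the default in-cluster monitoring stack, where the Prometheus route is named prometheus-k8s):

curl -sk -H "Authorization: Bearer $(oc whoami -t)" \
  --data-urlencode 'query=count_over_time(ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|etcdInsufficientMembers|FailingOperator",alertstate="firing"}[2h]) >= 1' \
  "https://$(oc get route prometheus-k8s -n openshift-monitoring -o jsonpath='{.spec.host}')/api/v1/query"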



operator Run template e2e-gcp-upi - e2e-gcp-upi container test 31m13s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-gcp\-upi\s\-\se2e\-gcp\-upi\scontainer\stest$'
e@sha256:a273f5ac7f1ad8f7ffab45205ac36c8dff92d9107ef3ae429eeb135fa8057b8b" already present on machine
Feb 23 17:31:37.911 I ns/openshift-must-gather-j2g87 pod/must-gather-f6mpb Created container copy
Feb 23 17:31:37.949 I ns/openshift-must-gather-j2g87 pod/must-gather-f6mpb Started container copy
Feb 23 17:31:45.631 W ns/openshift-must-gather-j2g87 pod/must-gather-f6mpb node/ci-op-vmbl7-m-2.c.openshift-gce-devel-ci.internal graceful deletion within 0s
Feb 23 17:31:45.635 W ns/openshift-must-gather-j2g87 pod/must-gather-f6mpb node/ci-op-vmbl7-m-2.c.openshift-gce-devel-ci.internal deleted
Feb 23 17:32:15.258 I persistentvolume/pvc-ef7e3118-1e37-4c97-adc1-eedb06bd501b googleapi: Error 400: The disk resource 'projects/openshift-gce-devel-ci/zones/us-east1-b/disks/ci-op-vmbl7-dynamic-pvc-ef7e3118-1e37-4c97-adc1-eedb06bd501b' is already being used by 'projects/openshift-gce-devel-ci/zones/us-east1-b/instances/ci-op-vmbl7-w-0', resourceInUseByAnotherResource
Feb 23 17:33:15.614 - 80s   I test="[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]" running
Feb 23 17:34:35.924 I test="[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]" failed

Failing tests:

[Feature:Platform] Managed cluster should [Top Level] [Feature:Platform] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel]
[Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200223-173639.xml

error: 2 fail, 956 pass, 1319 skip (27m40s)
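One other detail worth noting from this excerpt: the googleapi 400 at 17:32:15 shows PV teardown racing node cleanup, with the dynamically provisioned disk still attached to worker ci-op-vmbl7-w-0 when deletion was attempted. To confirm which instance still holds such a disk (a sketch with project and zone copied from the error message; requires gcloud credentials for the CI project):

gcloud compute disks describe ci-op-vmbl7-dynamic-pvc-ef7e3118-1e37-4c97-adc1-eedb06bd501b \
  --project openshift-gce-devel-ci --zone us-east1-b --format='value(users)'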

(full output in junit_operator.xml)



978 passed and 1319 skipped tests omitted.