Result: SUCCESS
Tests: 2 failed / 1041 succeeded
Started: 2020-09-26 13:51
Elapsed: 1h17m
Work namespace: ci-op-tvl9dl8b
Refs: release-4.4:24a82622, 284:0ea02080
Pod: 6e1f5e50-ffff-11ea-acd4-0a580a810c73
Repo: openshift/cluster-kube-scheduler-operator
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (25m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
5 error-level events were detected during this test run:

Sep 26 14:28:38.515 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-194-189.us-west-2.compute.internal node/ip-10-0-194-189.us-west-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): ds=454&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:36.768270       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/infrastructures?allowWatchBookmarks=true&resourceVersion=17322&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:36.769253       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/podmonitors?allowWatchBookmarks=true&resourceVersion=20832&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:36.770578       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/subscriptions?allowWatchBookmarks=true&resourceVersion=17988&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:36.771539       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=21919&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0926 14:28:37.394780       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0926 14:28:37.394827       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-194-189_bdc7a9b4-fd05-4598-b72c-04894ef56421 stopped leading\nF0926 14:28:37.394869       1 controllermanager.go:291] leaderelection lost\nI0926 14:28:37.434566       1 gc_controller.go:99] Shutting down GC controller\n
Sep 26 14:28:39.518 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-194-189.us-west-2.compute.internal node/ip-10-0-194-189.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error): ://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=21897&timeoutSeconds=552&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:37.613146       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=20392&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:37.614364       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=16496&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:37.619202       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=16496&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:37.622877       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=17159&timeout=6m10s&timeoutSeconds=370&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:28:37.624064       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=19898&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0926 14:28:38.471054       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0926 14:28:38.471132       1 server.go:257] leaderelection lost\n
Sep 26 14:29:02.579 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-194-189.us-west-2.compute.internal node/ip-10-0-194-189.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 26 14:29:11.587 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-194-189.us-west-2.compute.internal node/ip-10-0-194-189.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): okmarks=true&resourceVersion=21698&timeout=7m16s&timeoutSeconds=436&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:29:10.531164       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=21856&timeout=8m23s&timeoutSeconds=503&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:29:10.532215       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=21900&timeout=6m25s&timeoutSeconds=385&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:29:10.533360       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=21152&timeoutSeconds=587&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:29:10.534403       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=21143&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0926 14:29:10.535478       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=21208&timeout=5m58s&timeoutSeconds=358&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0926 14:29:10.537344       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0926 14:29:10.537375       1 leaderelection.go:67] leaderelection lost\n
Sep 26 14:43:30.963 E ns/default pod/recycler-for-nfs-n9jrp node/ip-10-0-180-168.us-west-2.compute.internal pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline

				
Full stdout/stderr: junit_e2e_20200926-145359.xml
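The failure text above is also recorded in that junit file. As a rough sketch (assuming the XML has been downloaded from the job artifacts and xmllint is available locally), the names of the failed test cases can be pulled out with:

xmllint --xpath '//testcase[failure]/@name' junit_e2e_20200926-145359.xml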



openshift-tests [Feature:APIServer] [Top Level] [Feature:APIServer] anonymous browsers should get a 403 from / [Suite:openshift/conformance/parallel] (13s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:APIServer\]\s\[Top\sLevel\]\s\[Feature\:APIServer\]\sanonymous\sbrowsers\sshould\sget\sa\s403\sfrom\s\/\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/util/client.go:695]: Sep 26 14:34:43.374: Get https://api.ci-op-tvl9dl8b-d14af.origin-ci-int-aws.dev.rhcloud.com:6443/apis/user.openshift.io/v1/users/~: net/http: TLS handshake timeout
				
Full stdout/stderr: junit_e2e_20200926-145359.xml
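This failure is a client-side TLS handshake timeout against the cluster's external API endpoint rather than a test assertion failing. A quick reachability check of that endpoint would have looked roughly like the following (a sketch only: the ci-op-* cluster is torn down after the run, and the timeout value is an arbitrary choice):

curl -k --connect-timeout 10 https://api.ci-op-tvl9dl8b-d14af.origin-ci-int-aws.dev.rhcloud.com:6443/healthz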



Passed tests: 1041
Skipped tests: 1326