Result: SUCCESS
Tests: 1 failed / 70 succeeded
Started: 2020-09-11 20:22
Elapsed: 1h15m
Work namespace: ci-op-gkm1kqkv
Refs: release-4.4:2f743203, 294:f6453c86
pod: 8d7f5b47-f46c-11ea-b188-0a580a800cf0
repo: openshift/cluster-samples-operator
revision: 1

Test Failures

openshift-tests Monitor cluster while tests execute (17m55s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
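The --ginkgo.focus value is a regular expression that selects exactly this spec: the hyphen is escaped and the spaces in the test name are written as \s, presumably so the value contains no literal whitespace when --test_args is parsed. As a quick sanity check (a sketch using GNU grep with PCRE support, not part of the job itself), the pattern can be tested against the test name:

echo 'openshift-tests Monitor cluster while tests execute' | grep -P 'openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'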
5 error level events were detected during this test run:

Sep 11 21:01:57.241 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-221-32.us-west-1.compute.internal node/ip-10-0-221-32.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): 61&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:56.442217       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=15521&timeout=5m38s&timeoutSeconds=338&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:56.443510       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/tuned.openshift.io/v1/profiles?allowWatchBookmarks=true&resourceVersion=17066&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:56.444605       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=20247&timeout=7m30s&timeoutSeconds=450&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:56.445591       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://localhost:6443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=15528&timeout=6m58s&timeoutSeconds=418&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:56.446771       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=20216&timeout=5m24s&timeoutSeconds=324&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0911 21:01:56.670548       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0911 21:01:56.670638       1 controllermanager.go:291] leaderelection lost\n
Sep 11 21:01:58.241 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-221-32.us-west-1.compute.internal node/ip-10-0-221-32.us-west-1.compute.internal container=kube-scheduler container exited with code 255 (Error): host:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=21552&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:57.254138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=19723&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:57.256234       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=21729&timeoutSeconds=327&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:57.256976       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=21690&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:57.258807       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=15524&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:01:57.259558       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=21731&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0911 21:01:57.949538       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0911 21:01:57.949584       1 server.go:257] leaderelection lost\n
Sep 11 21:02:22.281 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-221-32.us-west-1.compute.internal node/ip-10-0-221-32.us-west-1.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 11 21:02:22.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-221-32.us-west-1.compute.internal node/ip-10-0-221-32.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ntroller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=21672&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:02:21.083279       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=15550&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:02:21.084376       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=21719&timeout=5m55s&timeoutSeconds=355&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:02:21.093990       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=21186&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:02:21.095263       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=20570&timeoutSeconds=503&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0911 21:02:21.096475       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=16512&timeout=8m24s&timeoutSeconds=504&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0911 21:02:21.210117       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0911 21:02:21.210229       1 leaderelection.go:67] leaderelection lost\n
Sep 11 21:02:31.397 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-221-32.us-west-1.compute.internal node/ip-10-0-221-32.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): 02:30.346362       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:6443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=15521&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0911 21:02:30.347208       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0911 21:02:30.347304       1 policy_controller.go:94] leaderelection lost\nI0911 21:02:30.355390       1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller\nI0911 21:02:30.355427       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-221-32 stopped leading\nI0911 21:02:30.355444       1 resource_quota_controller.go:290] Shutting down resource quota controller\nI0911 21:02:30.349972       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\nI0911 21:02:30.355516       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0911 21:02:30.355501       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0911 21:02:30.355526       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0911 21:02:30.355530       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0911 21:02:30.355539       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0911 21:02:30.355548       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0911 21:02:30.355558       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0911 21:02:30.355568       1 reconciliation_controller.go:307] resource quota controller worker shutting down\n
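Four of the five events share the same signature: watch requests to https://localhost:6443 fail with "connection refused", the affected component then fails to renew its leader-election lease and exits with code 255; the fifth is the kube-apiserver setup init container on the same node (ip-10-0-221-32) timing out with code 124. That pattern is consistent with the local kube-apiserver being temporarily unavailable while its static pod restarted, rather than with a fault in the components that crashed. A minimal triage sketch, assuming the job's container logs have been downloaded into a local artifacts/ directory (the directory name and layout are assumptions, not part of this report):

# Per-file count of requests refused by the local apiserver (fixed-string match).
grep -rFc 'dial tcp [::1]:6443: connect: connection refused' artifacts/ | grep -v ':0$'

# Log files whose component subsequently lost its leader-election lease.
grep -rlF 'leaderelection lost' artifacts/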

Full stdout/stderr for this test is recorded in junit_e2e_20200911-211857.xml.
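As a hedged sketch (assuming the junit XML has been downloaded from the job's artifacts and that xmllint is available), the failing test case can be pulled straight out of the report:

# Name of every test case that recorded a failure in the junit report.
xmllint --xpath '//testcase[failure]/@name' junit_e2e_20200911-211857.xml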
