Result: FAILURE
Tests: 3 failed / 105 succeeded
Started: 2020-09-18 22:17
Elapsed: 1h54m
Work namespace: ci-op-1iy9q5ls
Pod: b7c99aae-f9fc-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 1h7m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
29 error level events were detected during this test run:

Sep 18 22:52:49.387 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-159-123.us-west-1.compute.internal node/ip-10-0-159-123.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): olume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=16933&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:52:48.398827       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=16945&timeout=7m19s&timeoutSeconds=439&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:52:48.399852       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=23293&timeout=6m50s&timeoutSeconds=410&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:52:48.403130       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=19653&timeout=8m0s&timeoutSeconds=480&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:52:48.404192       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=20371&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:52:48.406564       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=23094&timeoutSeconds=546&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 22:52:48.509985       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0918 22:52:48.510029       1 server.go:244] leaderelection lost\n
Sep 18 22:53:14.585 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-159-123.us-west-1.compute.internal node/ip-10-0-159-123.us-west-1.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Sep 18 22:53:14.590 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-159-123.us-west-1.compute.internal node/ip-10-0-159-123.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): 443: connect: connection refused\nE0918 22:53:13.506712       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=23169&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:53:13.509028       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=16933&timeout=7m35s&timeoutSeconds=455&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:53:13.510611       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=16932&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:53:13.511856       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=19653&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0918 22:53:13.513935       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Route: Get https://localhost:6443/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=22048&timeout=7m57s&timeoutSeconds=477&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0918 22:53:14.228033       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nI0918 22:53:14.228441       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-159-123 stopped leading\nF0918 22:53:14.228746       1 policy_controller.go:94] leaderelection lost\n
Sep 18 22:56:54.043 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-544d5cfb5d-zm9qq node/ip-10-0-156-240.us-west-1.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Sep 18 22:57:20.671 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-148-129.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T22:57:15.258Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T22:57:15.261Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T22:57:15.263Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T22:57:15.264Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T22:57:15.264Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T22:57:15.264Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T22:57:15.265Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T22:57:15.265Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 23:21:27.414 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 18 23:23:01.348 E ns/openshift-monitoring pod/openshift-state-metrics-db6bd67f8-h42lg node/ip-10-0-148-129.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Sep 18 23:23:01.375 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-544d5cfb5d-scj8z node/ip-10-0-148-129.us-west-1.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Sep 18 23:23:16.016 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-156-240.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T23:23:14.681Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T23:23:14.688Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T23:23:14.689Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T23:23:14.690Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T23:23:14.690Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T23:23:14.690Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T23:23:14.691Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T23:23:14.691Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 23:25:41.784 E ns/e2e-volumelimits-4002 pod/csi-hostpath-attacher-0 node/ip-10-0-183-183.us-west-1.compute.internal container/csi-attacher container exited with code 2 (Error): 
Sep 18 23:25:41.810 E ns/e2e-volumelimits-4002 pod/csi-hostpathplugin-0 node/ip-10-0-183-183.us-west-1.compute.internal container/liveness-probe container exited with code 2 (Error): 
Sep 18 23:25:41.810 E ns/e2e-volumelimits-4002 pod/csi-hostpathplugin-0 node/ip-10-0-183-183.us-west-1.compute.internal container/hostpath container exited with code 2 (Error): 
Sep 18 23:25:41.810 E ns/e2e-volumelimits-4002 pod/csi-hostpathplugin-0 node/ip-10-0-183-183.us-west-1.compute.internal container/node-driver-registrar container exited with code 2 (Error): 
Sep 18 23:25:41.831 E ns/e2e-volumelimits-4002 pod/csi-hostpath-provisioner-0 node/ip-10-0-183-183.us-west-1.compute.internal container/csi-provisioner container exited with code 255 (Error): Lost connection to CSI driver, exiting
Sep 18 23:28:30.074 E ns/e2e-daemonsets-2778 pod/daemon-set-7t7k4 node/ip-10-0-247-22.us-west-1.compute.internal container/app container exited with code 2 (Error): 
Sep 18 23:28:30.134 E ns/e2e-daemonsets-2778 pod/daemon-set-m96xb node/ip-10-0-183-183.us-west-1.compute.internal container/app container exited with code 2 (Error): 
Sep 18 23:28:31.658 E ns/e2e-daemonsets-2778 pod/daemon-set-79g6z node/ip-10-0-156-240.us-west-1.compute.internal container/app container exited with code 2 (Error): 
Sep 18 23:32:09.641 E ns/e2e-daemonsets-8655 pod/daemon-set-c67h8 node/ip-10-0-183-183.us-west-1.compute.internal reason/Failed (): 
Sep 18 23:37:17.584 E ns/e2e-test-ldap-group-sync-tgsrp pod/groupsync node/ip-10-0-183-183.us-west-1.compute.internal container/groupsync init container exited with code 137 (Error): 
Sep 18 23:37:17.584 E ns/e2e-test-ldap-group-sync-tgsrp pod/groupsync node/ip-10-0-183-183.us-west-1.compute.internal reason/Failed (): 
Sep 18 23:37:17.584 E ns/e2e-test-ldap-group-sync-tgsrp pod/groupsync node/ip-10-0-183-183.us-west-1.compute.internal container/groupsync container exited with code 137 (Error): 
Sep 18 23:39:13.006 E ns/openshift-marketplace pod/redhat-marketplace-d6f49b467-qqc4l node/ip-10-0-156-240.us-west-1.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Sep 18 23:40:39.191 E ns/e2e-test-topology-manager-xz4x8 pod/test-gls9w node/ip-10-0-156-240.us-west-1.compute.internal container/test-0 container exited with code 137 (Error): 
Sep 18 23:44:46.302 E ns/openshift-monitoring pod/kube-state-metrics-759b4bd968-2fr8q node/ip-10-0-183-183.us-west-1.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Sep 18 23:44:46.322 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-183-183.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/09/18 23:23:34 Watching directory: "/etc/alertmanager/config"\n
Sep 18 23:44:46.322 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-183-183.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/18 23:23:34 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 23:23:34 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 23:23:34 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 23:23:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 23:23:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 23:23:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 23:23:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 23:23:34.700615       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 23:23:34 http.go:107: HTTPS: listening on [::]:9095\n2020/09/18 23:23:43 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Sep 18 23:48:08.383 E ns/e2e-test-topology-manager-bqwhs pod/test-76zsd node/ip-10-0-183-183.us-west-1.compute.internal container/test-0 container exited with code 137 (Error): 
Sep 18 23:49:44.635 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-183-183.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/09/18 23:44:55 Watching directory: "/etc/alertmanager/config"\n
Sep 18 23:49:44.635 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-183-183.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/09/18 23:44:55 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 23:44:55 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 23:44:55 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 23:44:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 23:44:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 23:44:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 23:44:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 23:44:55.793256       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 23:44:55 http.go:107: HTTPS: listening on [::]:9095\n
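
Several of the events above share one failure mode: the kube-scheduler and cluster-policy-controller containers exit with code 255 after lease renewal against the local apiserver fails with "connection refused", ending in a fatal "leaderelection lost". Below is a minimal sketch of the client-go leader-election pattern that produces that exit; the lock type, namespace, lease name, and timings are illustrative assumptions, not the components' actual configuration.

// Minimal sketch of client-go leader election; losing the lease ends in a
// fatal exit, matching the "leaderelection lost" events above. Names and
// timings here are assumptions for illustration.
package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("building config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock, // assumed lock type
		"openshift-kube-scheduler",      // namespace, as in the event above
		"kube-scheduler",                // lease name, as in the event above
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		klog.Fatalf("creating resource lock: %v", err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// A real component runs its control loops here.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// When the apiserver stays unreachable past RenewDeadline, renewal
				// times out and the process exits fatally; this is the code-255
				// "leaderelection lost" seen in the events.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}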

				
from junit_e2e_20200918-235906.xml



openshift-tests [sig-operator][Feature:Marketplace] Marketplace resources with labels provider displayName [ocp-21728] create opsrc with labels [Serial] [Suite:openshift/conformance/serial] 1m29s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-operator\]\[Feature\:Marketplace\]\sMarketplace\sresources\swith\slabels\sprovider\sdisplayName\s\[ocp\-21728\]\screate\sopsrc\swith\slabels\s\[Serial\]\s\[Suite\:openshift\/conformance\/serial\]$'
fail [github.com/openshift/origin/test/extended/marketplace/marketplace_labels.go:97]: Unexpected error:
    <*errors.errorString | 0xc000200960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
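
The "timed out waiting for the condition" message is the generic wait.ErrWaitTimeout error from k8s.io/apimachinery, returned whenever a polled condition never becomes true within its deadline. A minimal sketch of that polling pattern follows; the condition function and timings are hypothetical stand-ins, not the actual check at marketplace_labels.go:97.

// Sketch only: the condition and timings are placeholders, not the
// marketplace test's real logic.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Hypothetical stand-in for "the labelled OperatorSource reached the
	// expected state"; it never succeeds, so Poll gives up at the deadline.
	opsrcReady := func() (bool, error) {
		return false, nil
	}

	// Poll every 5s for up to 30s. On timeout the returned error's message
	// is exactly "timed out waiting for the condition".
	if err := wait.Poll(5*time.Second, 30*time.Second, opsrcReady); err != nil {
		fmt.Println(err)
	}
}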
				
from junit_e2e_20200918-235906.xml



operator Run template e2e-aws-serial - e2e-aws-serial container test 1h7m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-aws\-serial\s\-\se2e\-aws\-serial\scontainer\stest$'
e available: 6 Insufficient example.com/fakePTSRes.
Sep 18 23:58:56.824 W ns/e2e-sched-preemption-162 pod/medium reason/FailedScheduling 0/6 nodes are available: 6 Insufficient example.com/fakePTSRes.
Sep 18 23:58:57.208 I ns/e2e-sched-preemption-162 pod/low-3 node/ip-10-0-247-22.us-west-1.compute.internal container/low-3 reason/Killing
Sep 18 23:58:58.217 W ns/e2e-sched-preemption-162 pod/low-2 node/ip-10-0-247-22.us-west-1.compute.internal invariant violation (bug): pod should not transition Running->Pending even when terminated
Sep 18 23:58:58.217 W ns/e2e-sched-preemption-162 pod/low-2 node/ip-10-0-247-22.us-west-1.compute.internal container/low-2 reason/NotReady
Sep 18 23:58:59.229 W ns/e2e-sched-preemption-162 pod/low-2 node/ip-10-0-247-22.us-west-1.compute.internal reason/Deleted
Sep 18 23:58:59.293 I ns/e2e-sched-preemption-162 pod/medium node/ip-10-0-247-22.us-west-1.compute.internal reason/Scheduled
Sep 18 23:59:01.764 I ns/e2e-sched-preemption-162 pod/medium reason/AddedInterface Add eth0 [10.128.2.84/23]
Sep 18 23:59:01.910 I ns/e2e-sched-preemption-162 pod/medium node/ip-10-0-247-22.us-west-1.compute.internal container/medium reason/Pulled image/k8s.gcr.io/pause:3.2
Sep 18 23:59:02.075 I ns/e2e-sched-preemption-162 pod/medium node/ip-10-0-247-22.us-west-1.compute.internal container/medium reason/Created
Sep 18 23:59:02.105 I ns/e2e-sched-preemption-162 pod/medium node/ip-10-0-247-22.us-west-1.compute.internal container/medium reason/Started
Sep 18 23:59:02.237 I ns/e2e-sched-preemption-162 pod/medium node/ip-10-0-247-22.us-west-1.compute.internal container/medium reason/Ready
Sep 18 23:59:05.106 W ns/e2e-sched-preemption-162 pod/low-3 node/ip-10-0-247-22.us-west-1.compute.internal reason/Deleted
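
The sequence above is the standard priority/preemption flow: the medium-priority pod cannot be scheduled for lack of example.com/fakePTSRes, a low-priority pod is killed and deleted to free the resource, and medium is then scheduled, pulled, created, and started. The sketch below shows, with assumed names and values, how such a scenario is typically set up with client-go: two PriorityClasses and a higher-priority pod requesting the scarce extended resource. It assumes the fake resource is already advertised in node status (the e2e test patches that in), which the sketch does not do.

// Sketch of a preemption setup; names, values, and namespace are assumptions.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Two PriorityClasses: the scheduler may preempt pods of lower Value
	// to make room for pods of higher Value.
	for name, value := range map[string]int32{"low-priority": 100, "medium-priority": 200} {
		if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// A higher-priority pod requesting the scarce extended resource; if all
	// of it is held by low-priority pods, one of them gets preempted.
	quantity := resource.MustParse("1")
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "medium", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority",
			Containers: []corev1.Container{{
				Name:  "medium",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{"example.com/fakePTSRes": quantity},
					Limits:   corev1.ResourceList{"example.com/fakePTSRes": quantity},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}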

Failing tests:

[sig-operator][Feature:Marketplace] Marketplace resources with labels provider displayName [ocp-21728] create opsrc with labels [Serial] [Suite:openshift/conformance/serial]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200918-235906.xml

error: 1 fail, 65 pass, 210 skip (1h7m2s)

				from junit_operator.xml


