Result: FAILURE
Tests: 7 failed / 872 succeeded
Started: 2020-02-13 11:37
Elapsed: 2h3m
Work namespace: ci-op-bw0k04wq
Pod: 4.3.0-0.nightly-2020-02-13-113357-azure-ovn
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 38m55s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
1 error-level event was detected during this test run:

Feb 13 12:55:02.575 E ns/default pod/recycler-for-nfs-xj47t node/ci-op-bw0k04wq-adff6-b8wq4-worker-westus-9q95t pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
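A pod exceeding its active deadline like this is normally triaged from the namespace events and from the deadline set on the pod spec. A minimal sketch of that follow-up, assuming the run's admin kubeconfig was still usable and the recycler pod had not yet been garbage-collected (these commands are illustrative, not part of the test output):

# Warning events recorded for the recycler pod in the default namespace
kubectl --kubeconfig=/tmp/admin.kubeconfig get events -n default --field-selector involvedObject.name=recycler-for-nfs-xj47t

# The deadline that was exceeded (activeDeadlineSeconds on the recycler pod spec)
kubectl --kubeconfig=/tmp/admin.kubeconfig get pod recycler-for-nfs-xj47t -n default -o jsonpath='{.spec.activeDeadlineSeconds}{"\n"}'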

				
from junit_e2e_20200213-131603.xml



openshift-tests [Conformance][Area:Networking][Feature:Router] The HAProxy router should set Forwarded headers appropriately [Suite:openshift/conformance/parallel/minimal] 2m1s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Conformance\]\[Area\:Networking\]\[Feature\:Router\]\sThe\sHAProxy\srouter\sshould\sset\sForwarded\sheaders\sappropriately\s\[Suite\:openshift\/conformance\/parallel\/minimal\]$'
fail [github.com/openshift/origin/test/extended/router/headers.go:183]: Feb 13 12:48:43.914: Unexpected header: '100.64.0.5' (expected 10.131.0.19); All headers: http.Header{"Accept":[]string{"*/*"}, "Forwarded":[]string{"for=100.64.0.5;host=router-headers.example.com;proto=http;proto-version=\"\""}, "User-Agent":[]string{"curl/7.61.1"}, "X-Forwarded-For":[]string{"100.64.0.5"}, "X-Forwarded-Host":[]string{"router-headers.example.com"}, "X-Forwarded-Port":[]string{"80"}, "X-Forwarded-Proto":[]string{"http"}}
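The backend saw the request arriving from 100.64.0.5 (an OVN-Kubernetes gateway/SNAT address) instead of the client pod's own 10.131.0.19, so the for= element of Forwarded and the X-Forwarded-For value no longer identify the real client. A hedged sketch of reproducing the comparison by hand, with placeholder pod and router names since the failure message does not include them:

# The address the headers are expected to carry: the client pod's IP
kubectl get pod <client-pod> -n <test-ns> -o jsonpath='{.status.podIP}{"\n"}'

# Send a request through the router with the route's Host header; the test's backend
# records the Forwarded / X-Forwarded-For headers it received, as dumped in the failure above
kubectl exec -n <test-ns> <client-pod> -- curl -s -H 'Host: router-headers.example.com' http://<router-internal-ip>/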
				
from junit_e2e_20200213-131603.xml



openshift-tests [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs [Suite:openshift/conformance/parallel] [Suite:k8s] 15m0s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\swith\sPVCs\s\[Suite\:openshift\/conformance\/parallel\]\s\[Suite\:k8s\]$'
Feb 13 12:37:18.835: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-2803 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 12:37:20.354: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 13 12:37:20.354: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 13 12:37:20.354: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 13 12:37:30.816: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:37:30.816: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:37:30.816: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:37:40.972: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:37:40.972: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:37:40.972: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:37:50.969: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:37:50.969: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:37:50.970: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:00.980: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:00.980: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:00.980: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:10.975: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:10.975: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:10.975: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:20.975: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:20.975: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:30.981: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:30.981: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:40.972: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:40.972: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:38:50.973: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:38:50.973: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:39:00.969: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:39:00.969: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:39:10.988: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:39:10.988: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:39:20.974: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:39:20.974: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:39:30.967: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:39:30.967: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-59b79b8798 update revision ss-6d5f4b76b7
Feb 13 12:39:40.971: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:39:50.976: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:00.968: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:10.976: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:20.968: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:30.973: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:40.978: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:40:50.973: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:00.970: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:10.973: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:20.969: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:30.974: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:40.976: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:41:50.971: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:00.969: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:10.968: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:20.970: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:30.968: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:40.982: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:42:50.969: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:00.974: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:10.967: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:20.973: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:30.971: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:40.975: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:43:50.974: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:44:00.967: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
STEP: Rolling back to a previous revision
Feb 13 12:44:10.973: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-2803 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 13 12:44:14.644: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Feb 13 12:44:14.644: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 13 12:44:14.644: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 13 12:44:25.120: INFO: Updating stateful set ss
STEP: Rolling back update in reverse ordinal order
Feb 13 12:44:25.367: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-2803 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 12:44:26.538: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Feb 13 12:44:26.538: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 13 12:44:26.538: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 13 12:44:37.010: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:44:37.010: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:44:37.010: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:44:47.167: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:44:47.167: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:44:47.167: INFO: Waiting for Pod e2e-statefulset-2803/ss-1 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:44:57.255: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:44:57.255: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:45:07.179: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:45:07.179: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:45:17.169: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:45:17.169: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:45:27.164: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:45:27.164: INFO: Waiting for Pod e2e-statefulset-2803/ss-0 to have revision ss-6d5f4b76b7 update revision ss-59b79b8798
Feb 13 12:45:37.162: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:45:47.161: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:45:57.162: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:46:07.168: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:46:17.174: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update
Feb 13 12:46:27.161: INFO: Waiting for StatefulSet e2e-statefulset-2803/ss to complete update

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Feb 13 12:46:30.904: INFO: Running AfterSuite actions on all nodes
Feb 13 12:46:30.904: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-statefulset-2803" for this suite.
Feb 13 12:46:31.217: INFO: Running AfterSuite actions on node 1
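Each "Waiting for Pod ... to have revision X update revision Y" line above means the pod is still running revision X while the StatefulSet's update revision is Y; here ss-0 never reaches the update revision before the suite is interrupted. A minimal sketch of checking the same revisions by hand (namespace taken from the log; commands illustrative):

# Current vs. update revision recorded in the StatefulSet status
kubectl --kubeconfig=/tmp/admin.kubeconfig get statefulset ss -n e2e-statefulset-2803 -o jsonpath='{.status.currentRevision}{" "}{.status.updateRevision}{"\n"}'

# Which revision each pod is actually running (controller-revision-hash label)
kubectl --kubeconfig=/tmp/admin.kubeconfig get pods -n e2e-statefulset-2803 -L controller-revision-hash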
				
from junit_e2e_20200213-131603.xml



openshift-tests [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity [Suite:openshift/conformance/parallel] [Suite:k8s] 15m0s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sprovide\sbasic\sidentity\s\[Suite\:openshift\/conformance\/parallel\]\s\[Suite\:k8s\]$'
Feb 13 12:53:12.045: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:53:12.045: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:53:12.045: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Pending - Ready=false
Feb 13 12:53:22.043: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:53:22.043: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 12:53:22.043: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 13 12:53:22.043: INFO: Resuming stateful pod at index 2
Feb 13 12:53:22.121: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync'
Feb 13 12:53:23.397: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n"
Feb 13 12:53:23.397: INFO: stdout: ""
Feb 13 12:53:23.397: INFO: Resumed pod ss-2
STEP: Verifying statefulset mounted data directory is usable
Feb 13 12:53:23.474: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-0 -- /bin/sh -x -c ls -idlh /data'
Feb 13 12:53:24.627: INFO: stderr: "+ ls -idlh /data\n"
Feb 13 12:53:24.627: INFO: stdout: "      2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:45 /data\n"
Feb 13 12:53:24.627: INFO: stdout of ls -idlh /data on ss-0:       2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:45 /data

Feb 13 12:53:24.627: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-1 -- /bin/sh -x -c ls -idlh /data'
Feb 13 12:53:26.027: INFO: stderr: "+ ls -idlh /data\n"
Feb 13 12:53:26.027: INFO: stdout: "      2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:48 /data\n"
Feb 13 12:53:26.027: INFO: stdout of ls -idlh /data on ss-1:       2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:48 /data

Feb 13 12:53:26.027: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c ls -idlh /data'
Feb 13 12:53:27.299: INFO: stderr: "+ ls -idlh /data\n"
Feb 13 12:53:27.299: INFO: stdout: "      2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:53 /data\n"
Feb 13 12:53:27.299: INFO: stdout of ls -idlh /data on ss-2:       2 drwxr-xr-x    3 root     root        4.0K Feb 13 12:53 /data

Feb 13 12:53:27.375: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-0 -- /bin/sh -x -c find /data'
Feb 13 12:53:28.698: INFO: stderr: "+ find /data\n"
Feb 13 12:53:28.698: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n"
Feb 13 12:53:28.698: INFO: stdout of find /data on ss-0: /data
/data/statefulset-continue
/data/lost+found

Feb 13 12:53:28.698: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-1 -- /bin/sh -x -c find /data'
Feb 13 12:53:30.481: INFO: stderr: "+ find /data\n"
Feb 13 12:53:30.481: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n"
Feb 13 12:53:30.481: INFO: stdout of find /data on ss-1: /data
/data/statefulset-continue
/data/lost+found

Feb 13 12:53:30.481: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c find /data'
Feb 13 12:53:32.583: INFO: stderr: "+ find /data\n"
Feb 13 12:53:32.583: INFO: stdout: "/data\n/data/statefulset-continue\n/data/lost+found\n"
Feb 13 12:53:32.583: INFO: stdout of find /data on ss-2: /data
/data/statefulset-continue
/data/lost+found

Feb 13 12:53:32.661: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-0 -- /bin/sh -x -c touch /data/1581598403397916724'
Feb 13 12:53:33.939: INFO: stderr: "+ touch /data/1581598403397916724\n"
Feb 13 12:53:33.939: INFO: stdout: ""
Feb 13 12:53:33.939: INFO: stdout of touch /data/1581598403397916724 on ss-0: 
Feb 13 12:53:33.939: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-1 -- /bin/sh -x -c touch /data/1581598403397916724'
Feb 13 12:53:35.183: INFO: stderr: "+ touch /data/1581598403397916724\n"
Feb 13 12:53:35.183: INFO: stdout: ""
Feb 13 12:53:35.183: INFO: stdout of touch /data/1581598403397916724 on ss-1: 
Feb 13 12:53:35.183: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c touch /data/1581598403397916724'
Feb 13 12:53:36.620: INFO: stderr: "+ touch /data/1581598403397916724\n"
Feb 13 12:53:36.620: INFO: stdout: ""
Feb 13 12:53:36.620: INFO: stdout of touch /data/1581598403397916724 on ss-2: 
STEP: Verifying statefulset provides a stable hostname for each pod
Feb 13 12:53:36.699: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-0 -- /bin/sh -x -c printf $(hostname)'
Feb 13 12:53:39.097: INFO: stderr: "+ hostname\n+ printf ss-0\n"
Feb 13 12:53:39.097: INFO: stdout: "ss-0"
Feb 13 12:53:39.097: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-1 -- /bin/sh -x -c printf $(hostname)'
Feb 13 12:53:40.270: INFO: stderr: "+ hostname\n+ printf ss-1\n"
Feb 13 12:53:40.270: INFO: stdout: "ss-1"
Feb 13 12:53:40.270: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c printf $(hostname)'
Feb 13 12:53:41.453: INFO: stderr: "+ hostname\n+ printf ss-2\n"
Feb 13 12:53:41.453: INFO: stdout: "ss-2"
STEP: Verifying statefulset set proper service name
Feb 13 12:53:41.453: INFO: Checking if statefulset spec.serviceName is test
STEP: Running echo $(hostname) | dd of=/data/hostname conv=fsync in all stateful pods
Feb 13 12:53:41.530: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-0 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync'
Feb 13 12:53:42.850: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-0\n0+1 records in\n0+1 records out\n"
Feb 13 12:53:42.850: INFO: stdout: ""
Feb 13 12:53:42.850: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-0: 
Feb 13 12:53:42.850: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-1 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync'
Feb 13 12:53:45.224: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-1\n0+1 records in\n0+1 records out\n"
Feb 13 12:53:45.224: INFO: stdout: ""
Feb 13 12:53:45.224: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-1: 
Feb 13 12:53:45.224: INFO: Running '/usr/bin/kubectl --server=https://api.ci-op-bw0k04wq-adff6.ci.azure.devcluster.openshift.com:6443 --kubeconfig=/tmp/admin.kubeconfig exec --namespace=e2e-statefulset-4091 ss-2 -- /bin/sh -x -c echo $(hostname) | dd of=/data/hostname conv=fsync'
Feb 13 12:53:46.428: INFO: stderr: "+ dd 'of=/data/hostname' 'conv=fsync'\n+ hostname\n+ echo ss-2\n0+1 records in\n0+1 records out\n"
Feb 13 12:53:46.428: INFO: stdout: ""
Feb 13 12:53:46.428: INFO: stdout of echo $(hostname) | dd of=/data/hostname conv=fsync on ss-2: 
STEP: Restarting statefulset ss
Feb 13 12:53:46.428: INFO: Scaling statefulset ss to 0
Feb 13 12:54:06.749: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 12:54:07.076: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:54:17.151: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:54:27.152: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:54:37.153: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:54:47.157: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:54:57.153: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:55:07.153: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:55:17.153: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:55:27.164: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:55:37.155: INFO: Found 1 stateful pods, waiting for 3
Feb 13 12:55:47.155: INFO: Found 1 stateful pods, waiting for 3

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Feb 13 12:55:48.517: INFO: Running AfterSuite actions on all nodes
Feb 13 12:55:48.517: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-statefulset-4091" for this suite.
Feb 13 12:55:48.821: INFO: Running AfterSuite actions on node 1
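After the StatefulSet is scaled down and restarted, only one replacement pod appears within the wait ("Found 1 stateful pods, waiting for 3"); the log does not show why the others are missing, which on Azure is often slow disk detach/attach rather than the StatefulSet controller itself. A minimal sketch of the usual follow-up, assuming the e2e namespace still existed at the time (commands illustrative):

# Scheduling / volume events for a pod that has not come up
kubectl --kubeconfig=/tmp/admin.kubeconfig describe pod ss-1 -n e2e-statefulset-4091 | sed -n '/Events:/,$p'

# Are the claims still Bound, and what has happened recently in the namespace?
kubectl --kubeconfig=/tmp/admin.kubeconfig get pvc -n e2e-statefulset-4091
kubectl --kubeconfig=/tmp/admin.kubeconfig get events -n e2e-statefulset-4091 --sort-by=.lastTimestamp | tail -n 20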
				
from junit_e2e_20200213-131603.xml



openshift-tests [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s] 9m50s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolumes\sshould\sstore\sdata\s\[Suite\:openshift\/conformance\/parallel\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:283]: Persistent Volume pvc-ef8829bc-151b-418d-b73c-9e3e7d6e7543 not deleted by dynamic provisioner
Unexpected error:
    <*errors.errorString | 0xc0032f8fb0>: {
        s: "PersistentVolume pvc-ef8829bc-151b-418d-b73c-9e3e7d6e7543 still exists within 5m0s",
    }
    PersistentVolume pvc-ef8829bc-151b-418d-b73c-9e3e7d6e7543 still exists within 5m0s
occurred
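The claim was removed but its dynamically provisioned volume outlived the 5m0s wait, which with the in-tree Azure provisioner usually means the disk is still detaching or being deleted in Azure, or a finalizer is still present on the PV. A hedged sketch of what to look at, using the PV name from the failure (commands illustrative):

# Phase, reclaim policy and any finalizers still blocking deletion
kubectl get pv pvc-ef8829bc-151b-418d-b73c-9e3e7d6e7543 -o jsonpath='{.status.phase}{" "}{.spec.persistentVolumeReclaimPolicy}{" "}{.metadata.finalizers}{"\n"}'

# Provisioner/deleter errors are recorded as events on the PV object
kubectl describe pv pvc-ef8829bc-151b-418d-b73c-9e3e7d6e7543 | sed -n '/Events:/,$p'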
				
from junit_e2e_20200213-131603.xml



openshift-tests [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s] 13m23s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\]\s\[Testpattern\:\sDynamic\sPV\s\(ext4\)\]\svolumes\sshould\sstore\sdata\s\[Suite\:openshift\/conformance\/parallel\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:564]: Feb 13 12:55:46.762: Failed to create client pod: timed out waiting for the condition
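"Failed to create client pod: timed out waiting for the condition" means the test's verification pod never became Ready, most commonly because the dynamically provisioned Azure disk could not be attached and mounted in time. A minimal sketch of the triage, with placeholder names since the failure message does not identify the pod or namespace:

# Find the client pod the volume fixture created and read its scheduling/mount events
kubectl get pods -n <e2e-volume-namespace>
kubectl describe pod <client-pod> -n <e2e-volume-namespace> | sed -n '/Events:/,$p'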
				
from junit_e2e_20200213-131603.xml



operator Run template e2e-azure - e2e-azure-ovn container test 49m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-azure\s\-\se2e\-azure\-ovn\scontainer\stest$'
r/operator.go:126: watch of *v1.ClusterOperator ended with: too old resource version: 142929 (142966)
W0213 13:11:02.952068     217 reflector.go:299] github.com/openshift/origin/pkg/monitor/operator.go:279: watch of *v1.ClusterVersion ended with: too old resource version: 142930 (142967)
W0213 13:14:09.400941     217 reflector.go:299] github.com/openshift/origin/pkg/monitor/operator.go:126: watch of *v1.ClusterOperator ended with: too old resource version: 143754 (144032)
W0213 13:14:09.731913     217 reflector.go:299] github.com/openshift/origin/pkg/monitor/operator.go:279: watch of *v1.ClusterVersion ended with: too old resource version: 142967 (144033)
W0213 13:14:15.147950     217 reflector.go:299] github.com/openshift/origin/pkg/monitor/operator.go:126: watch of *v1.ClusterOperator ended with: too old resource version: 144046 (144057)
W0213 13:14:15.449380     217 reflector.go:299] github.com/openshift/origin/pkg/monitor/operator.go:279: watch of *v1.ClusterVersion ended with: too old resource version: 144033 (144060)
Flaky tests:

[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]

Failing tests:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should set Forwarded headers appropriately [Suite:openshift/conformance/parallel/minimal]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20200213-131603.xml

error: 5 fail, 867 pass, 1217 skip (38m55s)

from junit_operator.xml


