Result: FAILURE
Tests: 4 failed / 12 succeeded
Started: 2019-04-23 06:30
Elapsed: 1h32m
Work namespace: ci-op-xqf4k9m8
Pod: 4.0.0-0.ci-2019-04-23-061845-upgrade

Test Failures


Cluster upgrade service-upgrade 42m48s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sservice\-upgrade$'
Apr 23 07:41:01.864: Could not reach HTTP service through a2a8f07f2659511e997f20aa58de86a0-1291630398.us-east-1.elb.amazonaws.com:80 after 2m0s

github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*ServiceTestJig).TestReachableHTTPWithRetriableErrorCodes(0xc42119d940, 0xc422faf450, 0x47, 0x50, 0x7e34488, 0x0, 0x0, 0x1bf08eb000)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/service_util.go:855 +0x36f
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*ServiceTestJig).TestReachableHTTP(0xc42119d940, 0xc422faf450, 0x47, 0x50, 0x1bf08eb000)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/service_util.go:847 +0x75
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).test.func1()
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades/services.go:100 +0x57
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc421d43ea0)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc421d43ea0, 0x77359400, 0x0, 0x4066201, 0xc4229fcb40)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc422e2fea0, 0x77359400, 0xc4229fcb40)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).test(0xc422c06b40, 0xc42136d1e0, 0xc4229fcb40, 0xc422e5d901)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades/services.go:99 +0x8f
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).Test(0xc422c06b40, 0xc42136d1e0, 0xc4229fcb40, 0x2)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades/services.go:81 +0x52
github.com/openshift/origin/test/e2e/upgrade.(*chaosMonkeyAdapter).Test(0xc422dc1140, 0xc422e5d900)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/upgrade.go:118 +0x1eb
github.com/openshift/origin/test/e2e/upgrade.(*chaosMonkeyAdapter).Test-fm(0xc422e5d900)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/upgrade.go:195 +0x34
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc422e5d900, 0xc422e5b600)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa8
from junit_upgrades.xml
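For reference, the failing check is the service reachability probe that TestReachableHTTP runs inside a wait.Until loop (see the stack above): it repeatedly GETs the load balancer hostname until it answers or the window expires. Below is a minimal standalone sketch of that kind of probe, not the framework's implementation; the "/" probe path, 5s per-request timeout, and 5s retry interval are assumptions, while the host, port, and 2m window come from the failure message.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// reachableHTTP polls http://host:port/ until it receives a 200 or the
// timeout expires, roughly what the failed check above does via wait.Until.
func reachableHTTP(host string, port int, timeout time.Duration) error {
	url := fmt.Sprintf("http://%s:%d/", host, port) // probe path is an assumption
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the service answered through the load balancer
			}
		}
		time.Sleep(5 * time.Second) // retry until the deadline
	}
	return fmt.Errorf("could not reach HTTP service through %s:%d after %v", host, port, timeout)
}

func main() {
	host := "a2a8f07f2659511e997f20aa58de86a0-1291630398.us-east-1.elb.amazonaws.com"
	if err := reachableHTTP(host, 80, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}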



openshift-tests Monitor cluster while tests execute 1h0m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
14 error level events were detected during this test run:

Apr 23 07:05:13.358 E clusteroperator/monitoring changed Failing to True: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter SecurityContextConstraints failed: retrieving SecurityContextConstraints object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get securitycontextconstraints.security.openshift.io node-exporter)
Apr 23 07:11:14.250 E clusteroperator/monitoring changed Failing to True: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter SecurityContextConstraints failed: retrieving SecurityContextConstraints object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get securitycontextconstraints.security.openshift.io node-exporter)
Apr 23 07:15:25.037 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator network is still updating
Apr 23 07:23:01.252 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator network is still updating
Apr 23 07:31:26.480 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator network is still updating
Apr 23 07:39:40.299 - 75s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 23 07:40:49.405 E clusteroperator/monitoring changed Failing to True: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 23 07:51:54.297 E kube-apiserver Kube API started failing: Get https://api.ci-op-xqf4k9m8-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 23 07:51:55.299 - 75s   E kube-apiserver Kube API is not responding to GET requests
Apr 23 07:53:29.297 E kube-apiserver Kube API started failing: Get https://api.ci-op-xqf4k9m8-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 23 07:53:40.299 - 29s   E kube-apiserver Kube API is not responding to GET requests
Apr 23 07:53:40.299 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 23 07:54:55.299 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 23 07:57:25.299 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests

				
from junit_e2e_20190423-075836.xml
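The kube-apiserver events above come from a monitor that repeatedly issues GET requests against /api/v1/namespaces/kube-system?timeout=3s and records when they time out. A hedged sketch of such a probe follows; the loop count, 1s interval, unauthenticated client, and skipped TLS verification are assumptions made to keep the example self-contained, while the URL and 3s timeout are taken from the logged errors.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL and 3s timeout taken from the "Kube API started failing" events above.
	apiURL := "https://api.ci-op-xqf4k9m8-77109.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=3s"
	client := &http.Client{
		Timeout: 3 * time.Second,
		// A real monitor would use the cluster CA and a bearer token; skipping
		// verification keeps this sketch self-contained. An unauthenticated
		// request that gets any HTTP status back still shows the apiserver is
		// responding; only a transport error (timeout, refused) counts as down.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 5; i++ { // poll count and 1s interval are arbitrary here
		if resp, err := client.Get(apiURL); err != nil {
			fmt.Printf("Kube API started failing: %v\n", err)
		} else {
			fmt.Printf("Kube API responded: %s\n", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(1 * time.Second)
	}
}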



openshift-tests [Disruptive] Cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] [Suite:openshift] [Serial] 1h0m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Disruptive\]\sCluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]\s\[Suite\:openshift\]\s\[Serial\]$'
fail [k8s.io/kubernetes/test/e2e/framework/service_util.go:855]: Apr 23 07:41:01.864: Could not reach HTTP service through a2a8f07f2659511e997f20aa58de86a0-1291630398.us-east-1.elb.amazonaws.com:80 after 2m0s
				
from junit_e2e_20190423-075836.xml



operator Run template e2e-aws-upgrade - e2e-aws-upgrade container test 1h0m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-aws\-upgrade\s\-\se2e\-aws\-upgrade\scontainer\stest$'
\"kube-apiserver-cert-syncer-7\" is not ready" to ""
Apr 23 07:58:34.265 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator Changed loglevel level to "2" (3 times)
Apr 23 07:58:34.403 I ns/openshift-apiserver pod/apiserver-glrt9 Container image "registry.svc.ci.openshift.org/ocp/4.0-2019-04-23-034919@sha256:4fe923bbfdfbdb54ba6981bf948e0ce6ca04e931e2b33ac544006691528426af" already present on machine
Apr 23 07:58:34.428 I ns/openshift-kube-apiserver pod/revision-pruner-7-ip-10-0-147-83.ec2.internal node/ip-10-0-147-83.ec2.internal created
Apr 23 07:58:34.434 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator Created Pod/revision-pruner-7-ip-10-0-147-83.ec2.internal -n openshift-kube-apiserver because it was missing
Apr 23 07:58:34.591 I ns/openshift-authentication pod/openshift-authentication-79d8977dfd-p7sbr Created container openshift-authentication
Apr 23 07:58:34.790 I ns/openshift-authentication pod/openshift-authentication-79d8977dfd-p7sbr Started container openshift-authentication
Apr 23 07:58:34.835 I ns/openshift-kube-apiserver-operator deployment/kube-apiserv