Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-08 01:32
Elapsed: 2h32m
Work namespace: ci-op-92nrnl8v
Pod: 4.2.0-0.nightly-2019-11-08-012816-azure-serial

Test Failures


openshift-tests: Monitor cluster while tests execute (1h27m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
34 error level events were detected during this test run:

Nov 08 02:29:32.578 E ns/openshift-monitoring pod/prometheus-adapter-5fd685f9c-bgt6m node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-rj4q5 container=prometheus-adapter container exited with code 2 (Error): I1108 02:19:25.031482       1 adapter.go:93] successfully using in-cluster auth\nI1108 02:19:25.775902       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 02:29:32.588 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-rj4q5 container=config-reloader container exited with code 2 (Error): 
Nov 08 02:29:32.588 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-rj4q5 container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 02:20:04 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 02:20:04 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 02:20:04 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 02:20:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 02:20:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 02:20:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 02:20:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 02:20:04 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 02:35:34.920 E kube-apiserver Kube API started failing: Get https://api.ci-op-92nrnl8v-f91d0.ci.azure.devcluster.openshift.com:6443/api/v1/namespaces/kube-system?timeout=3s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Nov 08 02:44:37.188 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 08 02:45:48.652 E ns/openshift-monitoring pod/kube-state-metrics-f55c697ff-vg5bf node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 02:45:48.835 E ns/openshift-monitoring pod/prometheus-adapter-5fd685f9c-x6mmc node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=prometheus-adapter container exited with code 2 (Error): I1108 02:19:23.800106       1 adapter.go:93] successfully using in-cluster auth\nI1108 02:19:24.926308       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 02:45:49.046 E ns/openshift-monitoring pod/openshift-state-metrics-557985667-b7xfj node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 02:45:49.885 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=rules-configmap-reloader container exited with code 2 (Error): 
Nov 08 02:45:49.885 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=prometheus-config-reloader container exited with code 2 (Error): 
Nov 08 02:45:49.885 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ckjb8 container=prometheus-proxy container exited with code 2 (Error): 2019/11/08 02:14:25 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 02:14:25 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 02:14:25 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 02:14:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/08 02:14:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 02:14:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/08 02:14:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 02:14:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/08 02:14:25 http.go:96: HTTPS: listening on [::]:9091\n2019/11/08 02:19:24 oauthproxy.go:774: basicauth: 10.131.0.6:50568 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 02:23:56 oauthproxy.go:774: basicauth: 10.131.0.6:51622 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/08 02:28:27 oauthproxy.go:774: basicauth: 10.131.0.6:52518 Authorization header does not start with 'Basic', skipping basic authentication\n
Nov 08 02:46:00.926 E ns/openshift-monitoring pod/openshift-state-metrics-557985667-gjg6r node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.926 E ns/openshift-monitoring pod/openshift-state-metrics-557985667-gjg6r node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.926 E ns/openshift-monitoring pod/openshift-state-metrics-557985667-gjg6r node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:00.927 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:05.327 E ns/openshift-monitoring pod/openshift-state-metrics-557985667-gjg6r node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-pqnzn container=openshift-state-metrics container exited with code 2 (Error): 
Nov 08 02:46:30.555 E ns/openshift-monitoring pod/kube-state-metrics-f55c697ff-n2l9l node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.555 E ns/openshift-monitoring pod/kube-state-metrics-f55c697ff-n2l9l node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.555 E ns/openshift-monitoring pod/kube-state-metrics-f55c697ff-n2l9l node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 02:47:02.058 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus2-rj4q5 container=prometheus container exited with code 1 (Error): 
Nov 08 02:58:12.629 E ns/openshift-monitoring pod/prometheus-adapter-5fd685f9c-gbh2b node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ntqkw container=prometheus-adapter container exited with code 2 (Error): I1108 02:46:26.671820       1 adapter.go:93] successfully using in-cluster auth\nI1108 02:46:27.657601       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 02:58:14.224 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ntqkw container=config-reloader container exited with code 2 (Error): 
Nov 08 02:58:14.224 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus1-ntqkw container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 02:46:36 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 02:46:36 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 02:46:36 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 02:46:36 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 02:46:36 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 02:46:36 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 02:46:36 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 02:46:36 http.go:96: HTTPS: listening on [::]:9095\n2019/11/08 02:46:40 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/08 02:46:43 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2019/11/08 02:46:45 reverseproxy.go:447: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
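Most of the events above follow one regular container-exit shape, which makes them easy to group by pod, container, or exit code during triage. A minimal sketch (the field names are my own, not part of the report; lines with other shapes, such as the kube-apiserver timeout, simply won't match):

```python
import re

# One monitor event line of the container-exit shape looks like:
#   "Nov 08 02:29:32.578 E ns/<ns> pod/<pod> node/<node> container=<name>
#    container exited with code <n> (<reason>): <message>"
EVENT_RE = re.compile(
    r"^(?P<time>\w{3} \d{2} [\d:.]+) E "
    r"ns/(?P<ns>\S+) pod/(?P<pod>\S+) node/(?P<node>\S+) "
    r"container=(?P<container>\S+) container exited with code (?P<code>\d+) "
    r"\((?P<reason>[^)]+)\)"
)

def parse_event(line):
    """Return a dict of fields for one container-exit event line,
    or None if the line uses a different event format."""
    m = EVENT_RE.match(line)
    return m.groupdict() if m else None

# Example using one line copied verbatim from the run above.
sample = ("Nov 08 02:46:30.588 E ns/openshift-monitoring pod/prometheus-k8s-0 "
          "node/ci-op-92nrnl8v-f91d0-rcjgg-worker-centralus3-ktgd9 "
          "container=prometheus container exited with code 137 "
          "(ContainerStatusUnknown): The container could not be located "
          "when the pod was terminated")
event = parse_event(sample)
```

Feeding every event line through `parse_event` and tallying `(code, reason)` pairs quickly separates the code-137 `ContainerStatusUnknown` churn (pods evicted during node disruption) from the code-2 application exits.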

				
Full stdout/stderr for this test is recorded in junit_e2e_20191108-034833.xml.



54 passed tests and 169 skipped tests are omitted here.