Result: SUCCESS
Tests: 1 failed / 54 succeeded
Started: 2019-11-07 23:26
Elapsed: 2h45m
Work namespace: ci-op-sphkdrtf
Pod: 4.2.0-0.nightly-2019-11-07-231418-azure-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h34m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
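The `--ginkgo.focus` value above is an anchored regular expression: `\s` stands in for each space in the test name and the hyphen is escaped. As a quick illustrative check (using GNU grep, whose ERE dialect supports `\s` as an extension), the pattern can be confirmed to match the failing test's name:

```shell
# Confirm the ginkgo focus regex matches the test name it targets.
# \s in ERE is a GNU grep extension; \- is treated as a literal hyphen.
echo 'openshift-tests Monitor cluster while tests execute' \
  | grep -qE 'openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$' \
  && echo 'pattern matches'
```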
22 error-level events were detected during this test run:

Nov 08 00:19:52.520 E ns/openshift-marketplace pod/redhat-operators-6bc49f9c75-hv7tv node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=redhat-operators container exited with code 2 (Error): 
Nov 08 00:19:53.525 E ns/openshift-marketplace pod/certified-operators-566fcdd998-2lr6x node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=certified-operators container exited with code 2 (Error): 
Nov 08 00:19:56.528 E ns/openshift-monitoring pod/prometheus-adapter-696f658c4d-8hl8j node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=prometheus-adapter container exited with code 2 (Error): I1108 00:03:05.610502       1 adapter.go:93] successfully using in-cluster auth\nI1108 00:03:06.815960       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 00:19:57.128 E ns/openshift-monitoring pod/kube-state-metrics-f55c697ff-gp4sj node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=kube-state-metrics container exited with code 2 (Error): 
Nov 08 00:30:37.769 E ns/openshift-marketplace pod/redhat-operators-6bc49f9c75-tlmvf node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=redhat-operators container exited with code 2 (Error): 
Nov 08 00:30:38.874 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:30:38.874 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 00:30:38.874 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 08 01:10:41.524 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=config-reloader container exited with code 2 (Error): 
Nov 08 01:10:41.524 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 00:30:50 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:30:50 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 00:30:50 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 00:30:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 00:30:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 00:30:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:30:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 00:30:50 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 01:10:41.596 E ns/openshift-ingress pod/router-default-8b5b7fcfc-cns2p node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=router container exited with code 2 (Error): r.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 00:30:40.462753       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 00:30:49.587117       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 00:30:54.577339       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 00:30:59.873694       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW1108 00:33:37.977306       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW1108 00:42:50.030852       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW1108 00:49:28.111270       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW1108 00:59:07.153761       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI1108 01:09:56.687282       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:25.310569       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:30.208630       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:37.572464       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 08 01:10:41.597 E ns/openshift-monitoring pod/telemeter-client-5cb56d6bf7-5rkzz node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=telemeter-client container exited with code 2 (Error): 
Nov 08 01:10:41.597 E ns/openshift-monitoring pod/telemeter-client-5cb56d6bf7-5rkzz node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=reload container exited with code 2 (Error): 
Nov 08 01:10:41.614 E ns/openshift-monitoring pod/prometheus-adapter-696f658c4d-qj7dz node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=prometheus-adapter container exited with code 2 (Error): I1108 00:20:07.782820       1 adapter.go:93] successfully using in-cluster auth\nI1108 00:20:08.346941       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 08 01:10:41.912 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=config-reloader container exited with code 2 (Error): 
Nov 08 01:10:41.912 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=alertmanager-proxy container exited with code 2 (Error): 2019/11/08 00:05:03 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:05:03 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/08 00:05:03 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/08 00:05:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/08 00:05:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/08 00:05:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/08 00:05:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/08 00:05:03 http.go:96: HTTPS: listening on [::]:9095\n
Nov 08 01:10:42.712 E ns/openshift-marketplace pod/community-operators-7569bfbb7f-xg24h node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-5tc5m container=community-operators container exited with code 2 (Error): 
Nov 08 01:11:30.480 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-rngpj container=prometheus container exited with code 1 (Error): 
Nov 08 01:14:25.438 E ns/openshift-ingress pod/router-default-8b5b7fcfc-9bgs6 node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=router container exited with code 2 (Error): es\nE1108 01:10:46.556743       1 haproxy.go:392] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\nI1108 01:10:46.612031       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:46.612053       1 router.go:255] Router is including routes in all namespaces\nI1108 01:10:46.853994       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:51.863305       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:10:56.850224       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:02.411390       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:07.355691       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:13.042016       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:21.192825       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:26.186028       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:31.478976       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:11:36.470720       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:12:56.669949       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1108 01:14:22.606832       1 router.go:561] Router reloaded:\n - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 08 01:14:25.485 E ns/openshift-monitoring pod/telemeter-client-5cb56d6bf7-lgm7s node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=reload container exited with code 2 (Error): 
Nov 08 01:14:25.485 E ns/openshift-monitoring pod/telemeter-client-5cb56d6bf7-lgm7s node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=telemeter-client container exited with code 2 (Error): 
Nov 08 01:22:06.397 E ns/openshift-image-registry pod/node-ca-87k5h node/ci-op-sphkdrtf-f91d0-h2l6x-worker-westus-8n7rg container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
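Every event above shares one line format: `<timestamp> E ns/<namespace> pod/<pod> node/<node> container=<name> ...`. As an illustrative sketch (the file name `monitor.log` is hypothetical; it stands for wherever the monitor output was saved), standard tools can tally the error events by namespace:

```shell
# Tally error-level monitor events by namespace.
# monitor.log is a hypothetical file containing event lines like those above.
grep ' E ns/' monitor.log \
  | sed 's|.* E ns/\([^ ]*\).*|\1|' \
  | sort | uniq -c | sort -rn
```

Run against this report's 22 events, such a tally would show the failures clustering in openshift-monitoring, openshift-marketplace, and openshift-ingress.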

				
				Click to see stdout/stderrfrom junit_e2e_20191108-014944.xml



54 tests passed.

169 tests skipped.