Result: SUCCESS
Tests: 1 failed / 56 succeeded
Started: 2019-11-05 23:24
Elapsed: 1h53m
Work namespace: ci-op-viwtldrh
Pod: 4.2.0-0.nightly-2019-11-05-231940-aws-serial

Test Failures


openshift-tests Monitor cluster while tests execute 1h14m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
55 error-level events were detected during this test run:

Nov 05 23:57:28.598 E ns/openshift-ingress pod/router-default-7674f86cdc-sctng node/ip-10-0-154-137.ec2.internal container=router container exited with code 2 (Error): 1105 23:53:08.384877       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:53:15.303220       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:53:25.420654       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:53:30.409774       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:53:45.102165       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:53:50.102555       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nE1105 23:54:55.118389       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nE1105 23:54:55.118394       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nE1105 23:54:55.118392       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=25, ErrCode=NO_ERROR, debug=""\nW1105 23:54:55.442442       1 reflector.go:341] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: too old resource version: 12198 (13923)\nI1105 23:55:04.047282       1 router.go:561] Router reloaded:\n - Proxy 
protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1105 23:55:09.066310       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 05 23:57:31.999 E ns/openshift-monitoring pod/prometheus-adapter-5957c6c8b9-9dcqq node/ip-10-0-154-137.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I1105 23:50:02.130149       1 adapter.go:93] successfully using in-cluster auth\nI1105 23:50:02.673846       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 05 23:57:35.798 E ns/openshift-console pod/downloads-6674c66cf4-pz2xn node/ip-10-0-154-137.ec2.internal container=download-server container exited with code 137 (Error): 9 23:54:39] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:54:41] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:54:49] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:54:51] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:54:59] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:01] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:09] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:11] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:19] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:21] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:29] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:31] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:39] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:41] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:49] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:51] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:55:59] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:01] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:09] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:11] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:19] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:21] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:29] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:31] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:39] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:41] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:49] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:51] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:56:59] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:57:01] "GET / 
HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:57:09] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:57:11] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:57:19] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [05/Nov/2019 23:57:21] "GET / HTTP/1.1" 200 -\n
Nov 05 23:57:41.429 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus container exited with code 1 (Error): 
Nov 06 00:16:43.957 E ns/openshift-machine-config-operator pod/machine-config-daemon-dqsjq node/ip-10-0-154-137.ec2.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:44.771 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:16:48.959 E ns/openshift-console pod/downloads-6674c66cf4-h7wx5 node/ip-10-0-154-137.ec2.internal container=download-server container exited with code 137 (Error): 9 00:13:55] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:13:57] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:05] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:07] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:15] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:17] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:25] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:27] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:35] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:37] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:45] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:47] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:55] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:14:57] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:05] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:07] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:15] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:17] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:25] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:27] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:35] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:37] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:45] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:47] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:55] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:15:57] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:05] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:07] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:15] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:17] "GET / 
HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:25] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:27] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:35] "GET / HTTP/1.1" 200 -\n10.128.2.1 - - [06/Nov/2019 00:16:37] "GET / HTTP/1.1" 200 -\n
Nov 06 00:16:58.325 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus container exited with code 1 (Error): 
Nov 06 00:26:01.476 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-137.ec2.internal container=config-reloader container exited with code 2 (Error): 
Nov 06 00:26:01.476 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-137.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 00:17:02 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 00:17:02 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 00:17:02 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 00:17:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 00:17:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 00:17:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 00:17:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 00:17:02 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 00:26:01.892 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): 
Nov 06 00:26:01.892 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 
Nov 06 00:26:01.892 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2019/11/06 00:16:57 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/06 00:16:57 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 00:16:57 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 00:16:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/06 00:16:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 00:16:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/06 00:16:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 00:16:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/06 00:16:57 http.go:96: HTTPS: listening on [::]:9091\n
Nov 06 00:26:16.279 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-137.ec2.internal container=prometheus container exited with code 1 (Error): 
Nov 06 00:33:56.655 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-137.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:33:56.655 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-137.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:33:56.655 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-137.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:34:12.890 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-133-170.ec2.internal container=prometheus container exited with code 1 (Error): 
Nov 06 00:44:19.737 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (apps/v1, Kind=DaemonSet) openshift-sdn/sdn: could not update object (apps/v1, Kind=DaemonSet) openshift-sdn/sdn: Operation cannot be fulfilled on daemonsets.apps "sdn": the object has been modified; please apply your changes to the latest version and try again
Nov 06 00:44:32.119 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 06 00:45:07.045 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-21.ec2.internal container=config-reloader container exited with code 2 (Error): 
Nov 06 00:45:07.045 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-21.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/05 23:50:52 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/05 23:50:52 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/05 23:50:52 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/05 23:50:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/05 23:50:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/05 23:50:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/05 23:50:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/05 23:50:52 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 00:45:07.886 E ns/openshift-monitoring pod/grafana-5d5d6cdf5-m4w2d node/ip-10-0-129-21.ec2.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:07.886 E ns/openshift-monitoring pod/grafana-5d5d6cdf5-m4w2d node/ip-10-0-129-21.ec2.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:09.485 E ns/openshift-monitoring pod/prometheus-adapter-5957c6c8b9-q5q6v node/ip-10-0-129-21.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I1105 23:50:02.464280       1 adapter.go:93] successfully using in-cluster auth\nI1105 23:50:03.412516       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 06 00:45:10.487 E ns/openshift-console pod/downloads-6674c66cf4-jx4r2 node/ip-10-0-129-21.ec2.internal container=download-server container exited with code 137 (Error): 9 00:42:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:26] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:36] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:46] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:50] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:42:56] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:00] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:06] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:10] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:16] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:26] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:36] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:46] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:50] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:43:56] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:00] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:06] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:10] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:16] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:20] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:26] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:30] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:36] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:40] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:46] "GET / HTTP/1.1" 
200 -\n10.129.2.1 - - [06/Nov/2019 00:44:50] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:44:56] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:45:00] "GET / HTTP/1.1" 200 -\n10.129.2.1 - - [06/Nov/2019 00:45:06] "GET / HTTP/1.1" 200 -\n
Nov 06 00:45:16.000 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-154-137.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.000 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-154-137.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.000 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-154-137.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:16.027 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-137.ec2.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:45:32.082 E ns/openshift-ingress pod/router-default-7674f86cdc-p5hmb node/ip-10-0-154-137.ec2.internal container=router container exited with code 2 (Error): I1106 00:45:13.245541       1 template.go:299] Starting template router (v4.2.4-201911050122)\nI1106 00:45:13.247449       1 metrics.go:147] Router health and metrics port listening at 0.0.0.0:1936 on HTTP and HTTPS\nI1106 00:45:13.253934       1 router.go:306] Watching "/etc/pki/tls/private" for changes\nE1106 00:45:13.255692       1 haproxy.go:392] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\nI1106 00:45:13.278585       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:13.278618       1 router.go:255] Router is including routes in all namespaces\nI1106 00:45:13.523184       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:18.540987       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:23.520693       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:28.516494       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 06 00:45:43.736 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=prometheus container exited with code 1 (Error): 
Nov 06 00:56:33.220 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-155-144.ec2.internal container=config-reloader container exited with code 2 (Error): 
Nov 06 00:56:33.220 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-155-144.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/06 00:45:36 provider.go:109: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 00:45:36 provider.go:114: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/06 00:45:36 provider.go:291: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/06 00:45:36 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/06 00:45:36 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/06 00:45:36 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/06 00:45:36 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/06 00:45:36 http.go:96: HTTPS: listening on [::]:9095\n
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:33.662 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-144.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:34.570 E ns/openshift-monitoring pod/grafana-5d5d6cdf5-gd5kk node/ip-10-0-135-160.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Nov 06 00:56:34.858 E ns/openshift-monitoring pod/prometheus-adapter-5957c6c8b9-w7xxp node/ip-10-0-155-144.ec2.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 06 00:56:34.970 E ns/openshift-ingress pod/router-default-7674f86cdc-5vln7 node/ip-10-0-135-160.ec2.internal container=router container exited with code 2 (Error): I1106 00:45:41.935550       1 template.go:299] Starting template router (v4.2.4-201911050122)\nI1106 00:45:41.937824       1 metrics.go:147] Router health and metrics port listening at 0.0.0.0:1936 on HTTP and HTTPS\nI1106 00:45:41.942911       1 router.go:306] Watching "/etc/pki/tls/private" for changes\nE1106 00:45:41.944337       1 haproxy.go:392] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\nI1106 00:45:41.966237       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:41.966282       1 router.go:255] Router is including routes in all namespaces\nI1106 00:45:42.200844       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:47.201135       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:45:59.347798       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:46:04.308440       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI1106 00:47:01.774261       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW1106 00:53:08.008424       1 reflector.go:341] github.com/openshift/router/pkg/router/controller/factory/factory.go:112: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nI1106 00:56:31.611802       1 router.go:561] Router reloaded:\n - Proxy protocol on, checking 
http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n
Nov 06 00:56:36.459 E ns/openshift-console pod/downloads-6674c66cf4-www8z node/ip-10-0-155-144.ec2.internal container=download-server container exited with code 137 (Error): 9 00:53:46] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:53:47] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:53:56] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:53:57] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:06] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:07] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:16] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:17] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:26] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:27] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:36] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:37] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:46] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:47] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:56] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:54:57] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:06] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:07] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:16] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:17] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:26] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:27] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:36] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:37] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:46] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:47] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:56] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:55:57] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:06] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:07] "GET / 
HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:16] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:17] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:26] "GET / HTTP/1.1" 200 -\n10.131.2.1 - - [06/Nov/2019 00:56:27] "GET / HTTP/1.1" 200 -\n
Nov 06 00:56:50.628 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-133-170.ec2.internal container=prometheus container exited with code 1 (Error): 

				
Full stdout/stderr available in junit_e2e_20191106-010936.xml



56 Passed Tests

167 Skipped Tests