Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2020-02-12 04:38
Elapsed: 1h32m
Work namespace: ci-op-cf8txhci
Pod: 4.4.0-0.ci-2020-02-12-042749-gcp-serial
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (45m25s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
25 error level events were detected during this test run:

Feb 12 05:28:17.744 E ns/openshift-machine-config-operator pod/machine-config-daemon-788zk node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.744 E ns/openshift-machine-config-operator pod/machine-config-daemon-788zk node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.820 E ns/openshift-monitoring pod/telemeter-client-8665bd8c57-n62zq node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.820 E ns/openshift-monitoring pod/telemeter-client-8665bd8c57-n62zq node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.820 E ns/openshift-monitoring pod/telemeter-client-8665bd8c57-n62zq node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=reload container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.860 E ns/openshift-dns pod/dns-default-bg8sw node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:28:17.860 E ns/openshift-dns pod/dns-default-bg8sw node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 12 05:30:55.942 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/02/12 05:28:38 Watching directory: "/etc/alertmanager/config"\n
Feb 12 05:30:55.942 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/12 05:28:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/12 05:28:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/12 05:28:38 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/12 05:28:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/12 05:28:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/12 05:28:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/12 05:28:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/12 05:28:39 http.go:96: HTTPS: listening on [::]:9095\n
Feb 12 05:30:56.008 E ns/openshift-machine-config-operator pod/machine-config-daemon-c6ltc node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 12 05:37:31.765 E ns/openshift-machine-config-operator pod/machine-config-daemon-9gsn4 node/ci-op-qv6sl-w-b-9h594.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 12 05:41:23.359 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 12 05:41:33.503 E ns/openshift-monitoring pod/thanos-querier-646cdf6768-57khc node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/12 05:07:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/12 05:07:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/12 05:07:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/12 05:07:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/12 05:07:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/12 05:07:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/12 05:07:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/12 05:07:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/12 05:07:57 http.go:96: HTTPS: listening on [::]:9091\n
Feb 12 05:41:33.534 E ns/openshift-kube-storage-version-migrator pod/migrator-665776b4f5-49s56 node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=migrator container exited with code 2 (Error): 
Feb 12 05:41:33.577 E ns/openshift-ingress pod/router-default-6bfc689ff7-bnsdh node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=router container exited with code 2 (Error): ttp://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:32:02.119062       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:32:27.850856       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:37:30.738459       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:37:35.711147       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:38:36.932943       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:38:58.572597       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:40:33.291419       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:40:38.280888       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:41:04.053859       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:41:09.057000       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0212 05:41:31.485049       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 12 05:41:33.596 E ns/openshift-monitoring pod/prometheus-adapter-55946b7897-wflbz node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I0212 05:07:05.299527       1 adapter.go:93] successfully using in-cluster auth\nI0212 05:07:07.157755       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 12 05:41:34.577 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2020/02/12 05:07:48 Watching directory: "/etc/alertmanager/config"\n
Feb 12 05:41:34.577 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/12 05:07:48 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/12 05:07:48 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/12 05:07:48 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/12 05:07:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/12 05:07:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/12 05:07:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/12 05:07:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/12 05:07:49 http.go:96: HTTPS: listening on [::]:9095\n
Feb 12 05:41:34.642 E ns/openshift-monitoring pod/telemeter-client-8665bd8c57-vtbb5 node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=reload container exited with code 2 (Error): 
Feb 12 05:41:34.642 E ns/openshift-monitoring pod/telemeter-client-8665bd8c57-vtbb5 node/ci-op-qv6sl-w-d-jgc8f.c.openshift-gce-devel-ci.internal container=telemeter-client container exited with code 2 (Error): 
Feb 12 05:42:01.144 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 12 05:42:06.362 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-qv6sl-w-b-djpvt.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-12T05:41:51.008Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-12T05:41:51.015Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-12T05:41:51.016Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-12T05:41:51.017Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-12T05:41:51.017Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-12T05:41:51.017Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-12T05:41:51.021Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-12T05:41:51.021Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-12
Feb 12 05:55:15.720 E ns/openshift-kube-storage-version-migrator pod/migrator-665776b4f5-8x8c9 node/ci-op-qv6sl-w-b-djpvt.c.openshift-gce-devel-ci.internal container=migrator container exited with code 2 (Error): 
Feb 12 05:55:40.558 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op-qv6sl-w-d-shxvb.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-12T05:55:37.677Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-12T05:55:37.683Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-12T05:55:37.683Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-12T05:55:37.684Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-12T05:55:37.684Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-12T05:55:37.684Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-12T05:55:37.686Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-12T05:55:37.686Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-12
Feb 12 05:58:36.326 E ns/openshift-machine-config-operator pod/machine-config-daemon-vxvg2 node/ci-op-qv6sl-w-b-djpvt.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
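Most of the error-level events above are container exits with codes 137, 143, 2, or 1, plus two clusteroperator Degraded transitions. For quick triage they can be tallied by exit code from a downloaded copy of the raw build log; a minimal sketch using standard shell tools, where build-log.txt is an assumed local filename:

# Count error-level container exits by exit code (build-log.txt is illustrative).
grep -E ' E ns/' build-log.txt \
  | grep -oE 'exited with code [0-9]+' \
  | sort | uniq -c | sort -rn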

				
Full stdout/stderr: junit_e2e_20200212-055840.xml



Passed tests: 57
Skipped tests: 25