Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2019-11-07 10:19
Elapsed: 1h37m
Work namespace: ci-op-c4dx0qmn
Pod: 4.3.0-0.ci-2019-11-07-101725-gcp-serial

Test Failures


openshift-tests Monitor cluster while tests execute (55m51s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
40 error level events were detected during this test run:

Nov 07 11:08:55.530 E ns/openshift-monitoring pod/prometheus-adapter-55c475bc9f-l2g9g node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=prometheus-adapter container exited with code 2 (Error): I1107 11:05:19.894402       1 adapter.go:93] successfully using in-cluster auth\nI1107 11:05:20.761155       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 07 11:08:55.571 E ns/openshift-machine-config-operator pod/machine-config-daemon-vjhcg node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 07 11:08:56.593 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:08:56.593 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:08:56.593 E ns/openshift-monitoring pod/alertmanager-main-1 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:14:56.409 E ns/openshift-machine-config-operator pod/machine-config-daemon-plpmf node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:14:56.409 E ns/openshift-machine-config-operator pod/machine-config-daemon-plpmf node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:18:46.762 E ns/openshift-authentication pod/oauth-openshift-5f88b9c977-hgj87 node/ci-op--qmh7k-m-2.c.openshift-gce-devel-ci.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:20:34.169 E ns/openshift-machine-config-operator pod/machine-config-daemon-ghgsg node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 07 11:28:22.084 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Nov 07 11:28:34.493 E ns/openshift-marketplace pod/community-operators-59f7ccd97-rjm4s node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 07 11:28:34.565 E ns/openshift-marketplace pod/redhat-operators-645d75f84c-ch7sf node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=redhat-operators container exited with code 2 (Error): 
Nov 07 11:28:34.666 E ns/openshift-monitoring pod/thanos-querier-587469fbb-bs94k node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:34.666 E ns/openshift-monitoring pod/thanos-querier-587469fbb-bs94k node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:34.666 E ns/openshift-monitoring pod/thanos-querier-587469fbb-bs94k node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:34.666 E ns/openshift-monitoring pod/thanos-querier-587469fbb-bs94k node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:34.691 E ns/openshift-marketplace pod/certified-operators-6f787fc6c8-hwjld node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=certified-operators container exited with code 2 (Error): 
Nov 07 11:28:35.650 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/07 10:44:17 Watching directory: "/etc/alertmanager/config"\n
Nov 07 11:28:35.650 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--qmh7k-w-b-8jtp2.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 10:44:17 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 10:44:17 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 10:44:17 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 10:44:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 10:44:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 10:44:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 10:44:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 10:44:17 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 11:28:44.454 E ns/openshift-monitoring pod/thanos-querier-587469fbb-j92qh node/ci-op--qmh7k-w-c-824hm.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:44.454 E ns/openshift-monitoring pod/thanos-querier-587469fbb-j92qh node/ci-op--qmh7k-w-c-824hm.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:44.454 E ns/openshift-monitoring pod/thanos-querier-587469fbb-j92qh node/ci-op--qmh7k-w-c-824hm.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:44.454 E ns/openshift-monitoring pod/thanos-querier-587469fbb-j92qh node/ci-op--qmh7k-w-c-824hm.c.openshift-gce-devel-ci.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:50.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:28:55.906 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 2 (Error): 2019/11/07 11:28:48 Watching directory: "/etc/alertmanager/config"\n
Nov 07 11:28:55.906 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 2 (Error): 2019/11/07 11:28:48 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 11:28:48 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/07 11:28:48 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/07 11:28:48 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/07 11:28:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/07 11:28:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/07 11:28:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/07 11:28:48 http.go:96: HTTPS: listening on [::]:9095\n
Nov 07 11:28:55.906 E ns/openshift-monitoring pod/alertmanager-main-2 node/ci-op--qmh7k-w-d-4l222.c.openshift-gce-devel-ci.internal container=alertmanager container exited with code 2 (Error): level=info ts=2019-11-07T11:28:47.748Z caller=main.go:217 msg="Starting Alertmanager" version="(version=0.19.0, branch=master, revision=11aa2a87c5bfb84cba76b90a3867a06a55e2605e)"\nlevel=info ts=2019-11-07T11:28:47.748Z caller=main.go:218 build_context="(go=go1.12.9, user=root@prometheus-alertmanager-build, date=20191018-14:50:12)"\n
Nov 07 11:29:09.020 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ci-op--qmh7k-w-b-kpn2t.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T11:29:01.605Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T11:29:01.609Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T11:29:01.609Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T11:29:01.610Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T11:29:01.610Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T11:29:01.611Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T11:29:01.611Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T11:29:01.611Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T11:29:01.613Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T11:29:01.613Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 11:29:17.777 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ci-op--qmh7k-w-d-fzwjs.c.openshift-gce-devel-ci.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-07T11:29:12.098Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-07T11:29:12.103Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-07T11:29:12.103Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-07T11:29:12.104Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-07T11:29:12.105Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-07T11:29:12.105Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-07T11:29:12.105Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-07T11:29:12.105Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-07T11:29:12.105Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-07T11:29:12.106Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-07T11:29:12.106Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-07
Nov 07 11:38:10.341 E ns/openshift-marketplace pod/community-operators-59f7ccd97-pjr8n node/ci-op--qmh7k-w-c-xxmbf.c.openshift-gce-devel-ci.internal container=community-operators container exited with code 2 (Error): 
Nov 07 11:42:04.087 E ns/openshift-machine-config-operator pod/machine-config-daemon-7pb6m node/ci-op--qmh7k-w-c-xxmbf.c.openshift-gce-devel-ci.internal container=oauth-proxy container exited with code 143 (Error): 
Nov 07 11:42:05.118 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--qmh7k-w-c-xxmbf.c.openshift-gce-devel-ci.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:42:05.118 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--qmh7k-w-c-xxmbf.c.openshift-gce-devel-ci.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 07 11:42:05.118 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op--qmh7k-w-c-xxmbf.c.openshift-gce-devel-ci.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
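A quick way to triage a run like this is to tally the error events by exit code, since the codes cluster into a few meaningful groups (codes above 128 conventionally mean "killed by signal code minus 128", so 137 is SIGKILL and 143 is SIGTERM). A minimal sketch, with the line format assumed from the excerpt above; the sample events here are abbreviated copies of three lines from it, not the full set of 40:

```python
import re
from collections import Counter

# Abbreviated sample events in the format used by the monitor output above.
events = """\
Nov 07 11:08:55.530 E ns/openshift-monitoring pod/prometheus-adapter-55c475bc9f-l2g9g container=prometheus-adapter container exited with code 2 (Error)
Nov 07 11:08:56.593 E ns/openshift-monitoring pod/alertmanager-main-1 container=alertmanager container exited with code 137 (ContainerStatusUnknown)
Nov 07 11:08:55.571 E ns/openshift-machine-config-operator pod/machine-config-daemon-vjhcg container=oauth-proxy container exited with code 143 (Error)
""".splitlines()

pattern = re.compile(r"exited with code (\d+)")

counts = Counter()
for line in events:
    m = pattern.search(line)
    if m:
        counts[int(m.group(1))] += 1

for code, n in sorted(counts.items()):
    # Exit codes > 128 usually indicate death by signal (code - 128):
    # 137 = SIGKILL (9), 143 = SIGTERM (15).
    signal = f" (signal {code - 128})" if code > 128 else ""
    print(f"exit code {code}{signal}: {n} event(s)")
```

Applied to the full list above, most events are code 137 with ContainerStatusUnknown, which typically points at pods being evicted or their nodes being disrupted during the serial suite rather than at application crashes.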

				
stdout/stderr recorded in junit_e2e_20191107-114641.xml



Passed: 57
Skipped: 8