Result: SUCCESS
Tests: 1 failed / 57 succeeded
Started: 2020-02-13 10:39
Elapsed: 1h43m
Work namespace: ci-op-q5k2xd7c
Pod: 4.4.0-0.nightly-2020-02-13-103342-aws-serial
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (51m13s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
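The ginkgo focus regex above selects this single monitor test by its full name. A minimal sketch of re-running it from a checkout of openshift/origin (assumptions: cluster-admin credentials for a running cluster; the KUBECONFIG path below is a placeholder, not taken from this job):

# Sketch: re-run only the failed monitor test against an existing cluster.
# Assumes an openshift/origin checkout; the kubeconfig path is a placeholder.
export KUBECONFIG=$HOME/clusters/mycluster/auth/kubeconfig
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'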
36 error level events were detected during this test run:

Feb 13 11:21:05.659 E ns/openshift-machine-config-operator pod/machine-config-daemon-qs5gz node/ip-10-0-133-25.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 13 11:21:05.696 E ns/openshift-marketplace pod/certified-operators-6469678cb4-h7mtq node/ip-10-0-133-25.ec2.internal container=certified-operators container exited with code 2 (Error): 
Feb 13 11:21:06.855 E ns/openshift-marketplace pod/community-operators-65d4fffb-nnmq2 node/ip-10-0-133-25.ec2.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:21:06.903 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-133-25.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:21:06.903 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-133-25.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:21:06.903 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-133-25.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:21:06.946 E ns/openshift-monitoring pod/grafana-7d66cf85bd-mrlk9 node/ip-10-0-133-25.ec2.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:21:06.946 E ns/openshift-monitoring pod/grafana-7d66cf85bd-mrlk9 node/ip-10-0-133-25.ec2.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:40:28.801 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 13 11:40:34.566 E ns/openshift-monitoring pod/thanos-querier-7c7fdc94df-84jfn node/ip-10-0-128-181.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/13 11:13:27 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 11:13:27 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 11:13:27 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 11:13:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/13 11:13:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 11:13:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/13 11:13:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 11:13:27 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/13 11:13:27 http.go:107: HTTPS: listening on [::]:9091\nI0213 11:13:27.208919       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 13 11:40:35.593 E ns/openshift-marketplace pod/redhat-marketplace-8d8b5b8cc-nrl6n node/ip-10-0-128-181.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 13 11:40:50.952 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-133-25.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-13T11:40:47.875Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-13T11:40:47.881Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-13T11:40:47.881Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-13T11:40:47.882Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-13T11:40:47.882Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-13T11:40:47.882Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-13T11:40:47.883Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-13T11:40:47.883Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-13
Feb 13 11:40:51.505 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-172.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/13 11:13:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 13 11:40:51.505 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-172.ec2.internal container=prometheus-proxy container exited with code 2 (Error): rovider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 11:13:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/13 11:13:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 11:13:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/13 11:13:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 11:13:46 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0213 11:13:46.547054       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 11:13:46 http.go:107: HTTPS: listening on [::]:9091\n2020/02/13 11:17:09 oauthproxy.go:774: basicauth: 10.128.2.10:45602 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/13 11:21:39 oauthproxy.go:774: basicauth: 10.128.2.10:49286 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/13 11:23:16 oauthproxy.go:774: basicauth: 10.129.0.3:45136 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/13 11:26:09 oauthproxy.go:774: basicauth: 10.128.2.10:53108 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/13 11:30:07 oauthproxy.go:774: basicauth: 10.129.0.3:49396 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/13 11:30:39 oauthproxy.go:774: basicauth: 10.128.2.10:56948 Authorization header does not start with 'Basic', skipping basic authentication\n202
Feb 13 11:40:51.505 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-172.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-13T11:13:45.825417004Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.35.1'."\nlevel=info ts=2020-02-13T11:13:45.825540712Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-13T11:13:45.828235289Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-13T11:13:50.980672694Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 13 11:40:51.547 E ns/openshift-ingress pod/router-default-7677b7f78b-gkgx7 node/ip-10-0-149-172.ec2.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:40:51.590 E ns/openshift-monitoring pod/prometheus-adapter-9b446bb89-66tnl node/ip-10-0-149-172.ec2.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:41:03.149 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 13 11:41:37.513 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-158-106.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-13T11:41:31.949Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-13T11:41:31.964Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-13T11:41:31.964Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-13T11:41:31.966Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-13T11:41:31.966Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-13T11:41:31.966Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-13T11:41:31.967Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-13T11:41:31.970Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-13T11:41:31.970Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-13
Feb 13 11:41:50.119 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-f7d9fd77d-mf484 node/ip-10-0-133-25.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 13 11:42:26.192 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-77f78d5bdb-cwxlf node/ip-10-0-140-187.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 13 11:42:26.266 E ns/openshift-kube-storage-version-migrator pod/migrator-5578bbc4dc-wdz2f node/ip-10-0-140-187.ec2.internal container=migrator container exited with code 2 (Error): 
Feb 13 11:42:26.286 E ns/openshift-machine-config-operator pod/machine-config-daemon-kbzrv node/ip-10-0-140-187.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 13 11:42:26.303 E ns/openshift-monitoring pod/prometheus-adapter-9b446bb89-mwp85 node/ip-10-0-140-187.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0213 11:40:44.106163       1 adapter.go:93] successfully using in-cluster auth\nI0213 11:40:46.059333       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 13 11:42:26.325 E ns/openshift-monitoring pod/telemeter-client-784d4b57d5-tm6bj node/ip-10-0-140-187.ec2.internal container=reload container exited with code 2 (Error): 
Feb 13 11:42:26.325 E ns/openshift-monitoring pod/telemeter-client-784d4b57d5-tm6bj node/ip-10-0-140-187.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Feb 13 11:42:27.303 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:42:27.303 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:42:27.303 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:49:37.963 E ns/openshift-dns pod/dns-default-25hbh node/ip-10-0-140-187.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:49:37.963 E ns/openshift-dns pod/dns-default-25hbh node/ip-10-0-140-187.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 13 11:49:37.979 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/13 11:42:42 Watching directory: "/etc/alertmanager/config"\n
Feb 13 11:49:37.979 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/13 11:42:42 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 11:42:42 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 11:42:42 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 11:42:42 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/13 11:42:42 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 11:42:42 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 11:42:42 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/13 11:42:42 http.go:107: HTTPS: listening on [::]:9095\nI0213 11:42:42.947105       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 13 12:05:18.831 E ns/openshift-machine-config-operator pod/machine-config-daemon-kv427 node/ip-10-0-140-187.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 13 12:05:18.870 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/13 11:49:53 Watching directory: "/etc/alertmanager/config"\n
Feb 13 12:05:18.870 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-187.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/13 11:49:53 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 11:49:53 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/13 11:49:53 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/13 11:49:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/13 11:49:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/13 11:49:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/13 11:49:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0213 11:49:53.909902       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/13 11:49:53 http.go:107: HTTPS: listening on [::]:9095\n

Full stdout/stderr is recorded in junit_e2e_20200213-120646.xml



Passed Tests: 57
Skipped Tests: 193