Result: SUCCESS
Tests: 1 failed / 94 succeeded
Started: 2020-09-19 00:17
Elapsed: 1h28m
Work namespace: ci-op-rrfsrg0p
Pod: 7af85190-fa0d-11ea-a1fd-0a580a800db2
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (45m42s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
33 error level events were detected during this test run:

Sep 19 00:51:26.939 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-192-39.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T00:51:25.180Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T00:51:25.189Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T00:51:25.189Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T00:51:25.190Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T00:51:25.191Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T00:51:25.191Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T00:51:25.191Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T00:51:25.191Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T00:51:25.191Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T00:51:25.192Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T00:51:25.192Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 00:51:32.954 E ns/openshift-monitoring pod/thanos-querier-8679df646b-6km6b node/ip-10-0-192-39.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/09/19 00:50:08 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 00:50:08 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 00:50:08 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 00:50:08 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 00:50:08 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 00:50:08 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 00:50:08 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 00:50:08 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/19 00:50:08 http.go:107: HTTPS: listening on [::]:9091\nI0919 00:50:08.823967       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 00:50:32 oauthproxy.go:774: basicauth: 10.129.0.2:39348 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 00:51:34.744 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-171-38.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T00:51:33.260Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T00:51:33.268Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T00:51:33.268Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T00:51:33.269Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T00:51:33.269Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T00:51:33.269Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T00:51:33.271Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T00:51:33.271Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 00:52:02.538 E kube-apiserver failed contacting the API: Get https://api.ci-op-rrfsrg0p-c30c1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=20468&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp 54.80.144.247:6443: connect: connection refused
Sep 19 00:52:15.594 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-166-93.ec2.internal node/ip-10-0-166-93.ec2.internal container=kube-controller-manager container exited with code 255 (Error): o:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/dnses?allowWatchBookmarks=true&resourceVersion=19214&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 00:52:14.737100       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?allowWatchBookmarks=true&resourceVersion=19122&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 00:52:14.738150       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshotclasses?allowWatchBookmarks=true&resourceVersion=17255&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 00:52:14.739418       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/proxies?allowWatchBookmarks=true&resourceVersion=19117&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0919 00:52:14.740639       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=20882&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0919 00:52:14.853146       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0919 00:52:14.853264       1 pv_controller_base.go:310] Shutting down persistent volume controller\nF0919 00:52:14.853273       1 controllermanager.go:291] leaderelection lost\n
Sep 19 00:52:38.671 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-93.ec2.internal node/ip-10-0-166-93.ec2.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 00:54:24.012 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-166-93.ec2.internal node/ip-10-0-166-93.ec2.internal container=setup init container exited with code 124 (Error): ................................................................................
Sep 19 01:07:01.311 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 19 01:08:22.214 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5b485ff987-jxqmm node/ip-10-0-171-38.ec2.internal container=operator container exited with code 255 (Error): :148] Finished syncing operator at 268.269265ms\nI0919 00:58:16.844194       1 operator.go:146] Starting syncing operator at 2020-09-19 00:58:16.844189641 +0000 UTC m=+66.842059297\nI0919 00:58:16.845098       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-09-19T00:44:40Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-09-19T00:58:16Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-19T00:45:08Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-19T00:44:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0919 00:58:16.851758       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"407a2c58-33c2-4ae8-a490-c6ab908be469", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("")\nI0919 00:58:17.441550       1 operator.go:148] Finished syncing operator at 597.349339ms\nI0919 00:58:17.441634       1 operator.go:146] Starting syncing operator at 2020-09-19 00:58:17.441609894 +0000 UTC m=+67.439479794\nI0919 00:58:18.039220       1 operator.go:148] Finished syncing operator at 597.601796ms\nI0919 01:08:20.165679       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0919 01:08:20.166198       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0919 01:08:20.166406       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0919 01:08:20.166452       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0919 01:08:20.166478       1 logging_controller.go:93] Shutting down LogLevelController\nF0919 01:08:20.166575       1 builder.go:243] stopped\n
Sep 19 01:08:22.236 E ns/openshift-marketplace pod/redhat-operators-6fd4dd6ddd-hpgtm node/ip-10-0-171-38.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 01:08:22.294 E ns/openshift-monitoring pod/openshift-state-metrics-7d589b8989-c6rv8 node/ip-10-0-171-38.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Sep 19 01:08:22.308 E ns/openshift-marketplace pod/certified-operators-858f7cc54d-9s7rr node/ip-10-0-171-38.ec2.internal container=certified-operators container exited with code 2 (Error): 
Sep 19 01:08:22.325 E ns/openshift-marketplace pod/redhat-marketplace-689d8cd545-cxxpz node/ip-10-0-171-38.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Sep 19 01:08:22.347 E ns/openshift-monitoring pod/prometheus-adapter-7c69c7dfc7-tc22z node/ip-10-0-171-38.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0919 00:50:09.717931       1 adapter.go:93] successfully using in-cluster auth\nI0919 00:50:11.159216       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 19 01:08:23.249 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-171-38.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 00:58:18 Watching directory: "/etc/alertmanager/config"\n
Sep 19 01:08:23.249 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-171-38.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 00:58:19 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 00:58:19 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 00:58:19 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 00:58:19 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 00:58:19 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 00:58:19 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 00:58:19 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0919 00:58:19.222909       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 00:58:19 http.go:107: HTTPS: listening on [::]:9095\n
Sep 19 01:08:23.364 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-171-38.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 00:50:45 Watching directory: "/etc/alertmanager/config"\n
Sep 19 01:08:23.364 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-171-38.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 00:50:45 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 00:50:45 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 00:50:45 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 00:50:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 00:50:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 00:50:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 00:50:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0919 00:50:45.887929       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 00:50:45 http.go:107: HTTPS: listening on [::]:9095\n
Sep 19 01:08:33.390 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-55676cb757-kbvg2 node/ip-10-0-192-39.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Sep 19 01:08:42.476 E ns/openshift-monitoring pod/grafana-67859f657d-jnff8 node/ip-10-0-192-39.ec2.internal container=grafana container exited with code 1 (Error): 
Sep 19 01:08:42.476 E ns/openshift-monitoring pod/grafana-67859f657d-jnff8 node/ip-10-0-192-39.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Sep 19 01:08:42.580 E ns/openshift-monitoring pod/thanos-querier-58f69d94c4-8g6hx node/ip-10-0-192-39.ec2.internal container=oauth-proxy container exited with code 2 (Error): /19 00:51:25 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 00:51:25 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 00:51:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/19 00:51:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 00:51:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/19 00:51:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/19 00:51:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0919 00:51:25.891250       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 00:51:25 http.go:107: HTTPS: listening on [::]:9091\n2020/09/19 00:55:32 oauthproxy.go:774: basicauth: 10.129.0.2:43568 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 00:56:32 oauthproxy.go:774: basicauth: 10.129.0.2:44150 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 01:00:32 oauthproxy.go:774: basicauth: 10.129.0.2:46984 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 01:01:06 oauthproxy.go:774: basicauth: 10.129.0.2:47366 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 01:04:32 oauthproxy.go:774: basicauth: 10.129.0.2:49518 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 01:05:32 oauthproxy.go:774: basicauth: 10.129.0.2:50212 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/19 01:08:32 oauthproxy.go:774: basicauth: 10.129.0.2:52664 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 19 01:08:53.852 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-152-105.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T01:08:41.908Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T01:08:41.913Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T01:08:41.914Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T01:08:41.915Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T01:08:41.915Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T01:08:41.915Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T01:08:41.916Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T01:08:41.916Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 01:09:02.614 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-206-116.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-19T01:08:59.471Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-19T01:08:59.476Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-19T01:08:59.476Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-19T01:08:59.477Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-19T01:08:59.477Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-19T01:08:59.477Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-19T01:08:59.477Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-19T01:08:59.478Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-19T01:08:59.478Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-19T01:08:59.478Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-19T01:08:59.478Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-19T01:08:59.478Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-19T01:08:59.478Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-19T01:08:59.478Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-19T01:08:59.479Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-19T01:08:59.479Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-19
Sep 19 01:09:11.158 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Sep 19 01:10:10.056 E ns/openshift-marketplace pod/community-operators-58f87b45c5-n2g8w node/ip-10-0-167-160.ec2.internal container=community-operators container exited with code 2 (Error): 
Sep 19 01:10:10.117 E ns/openshift-marketplace pod/redhat-operators-6fd4dd6ddd-4lzqp node/ip-10-0-167-160.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Sep 19 01:21:49.600 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-167-160.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/09/19 01:10:14 Watching directory: "/etc/alertmanager/config"\n
Sep 19 01:21:49.600 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-167-160.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/19 01:10:15 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 01:10:15 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/19 01:10:15 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/19 01:10:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/19 01:10:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/19 01:10:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/19 01:10:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0919 01:10:15.066511       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/19 01:10:15 http.go:107: HTTPS: listening on [::]:9095\n2020/09/19 01:10:30 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2020/09/19 01:10:31 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2020/09/19 01:10:35 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n2020/09/19 01:10:36 reverseproxy.go:437: http: proxy error: dial tcp [::1]:9093: connect: connection refused\n
Sep 19 01:24:58.999 E ns/openshift-marketplace pod/samename-6d44d59d8d-rnwl9 node/ip-10-0-167-160.ec2.internal container=samename container exited with code 2 (Error): 
Sep 19 01:30:38.584 E ns/openshift-marketplace pod/csctestlabel-597f7786d5-lzxlg node/ip-10-0-167-160.ec2.internal container=csctestlabel container exited with code 2 (Error): 
Sep 19 01:30:40.591 E ns/openshift-marketplace pod/opsrctestlabel-7c65bcc8cd-5ftz6 node/ip-10-0-167-160.ec2.internal container=opsrctestlabel container exited with code 2 (Error): 
Sep 19 01:30:40.617 E ns/openshift-marketplace pod/csctestlabel-7c7bb5b45c-jjjxr node/ip-10-0-167-160.ec2.internal container=csctestlabel container exited with code 2 (Error): 

				
stdout/stderr from junit_e2e_20200919-013707.xml



94 Passed Tests

193 Skipped Tests