Result: SUCCESS
Tests: 45 failed / 877 succeeded
Started: 2020-09-18 02:00
Elapsed: 6h17m
Work namespace: ci-op-n8dt0k91
pod: 7f208b14-f952-11ea-ad59-0a580a810c37
repos: {u'openshift/ibm-roks-toolkit': u'release-4.4'}
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute 4h35m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
69 error level events were detected during this test run:

Sep 18 03:36:35.764 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator ingress is reporting a failure: Some ingresscontrollers are degraded: default
Sep 18 03:48:14.200 E kube-apiserver Kube API started failing: Get https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 18 03:51:40.201 E ns/default pod/recycler-for-nfs-wpr47 node/10.221.250.246 pod failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Sep 18 06:27:17.600 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Sep 18 07:22:00.336 E ns/openshift-kube-storage-version-migrator pod/migrator-77f866f5b9-99dvk node/10.221.250.251 container=migrator container exited with code 2 (Error): I0918 03:18:35.647664       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Sep 18 07:22:01.382 E ns/openshift-monitoring pod/telemeter-client-b7d6dc4b7-kptdd node/10.221.250.251 container=telemeter-client container exited with code 2 (Error): 
Sep 18 07:22:01.382 E ns/openshift-monitoring pod/telemeter-client-b7d6dc4b7-kptdd node/10.221.250.251 container=reload container exited with code 2 (Error): 
Sep 18 07:22:01.427 E ns/openshift-monitoring pod/thanos-querier-74695f558-nf4ww node/10.221.250.251 container=oauth-proxy container exited with code 2 (Error): 2020/09/18 03:25:33 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 03:25:33 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:25:33 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:25:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 03:25:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:25:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 03:25:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 03:25:33 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0918 03:25:33.550065       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 03:25:33 http.go:107: HTTPS: listening on [::]:9091\n2020/09/18 04:23:06 server.go:3055: http: TLS handshake error from 172.29.117.38:55706: read tcp 172.29.68.224:9091->172.29.117.38:55706: read: connection reset by peer\n2020/09/18 05:12:11 server.go:3055: http: TLS handshake error from 172.29.117.38:47538: read tcp 172.29.68.224:9091->172.29.117.38:47538: read: connection reset by peer\n2020/09/18 05:37:51 server.go:3055: http: TLS handshake error from 172.29.117.38:37802: EOF\n2020/09/18 05:38:47 server.go:3055: http: TLS handshake error from 172.29.117.38:43046: EOF\n
Sep 18 07:22:02.409 E ns/openshift-marketplace pod/redhat-marketplace-6b468b8dc6-2wmv9 node/10.221.250.251 container=redhat-marketplace container exited with code 2 (Error): 
Sep 18 07:22:03.356 E ns/openshift-monitoring pod/kube-state-metrics-7c4d66c6d-4fbvl node/10.221.250.251 container=kube-state-metrics container exited with code 2 (Error): 
Sep 18 07:22:04.365 E ns/openshift-monitoring pod/openshift-state-metrics-8fd78f746-p7bqz node/10.221.250.251 container=openshift-state-metrics container exited with code 2 (Error): 
Sep 18 07:22:04.397 E ns/openshift-service-ca pod/service-ca-688cc6755c-jp22t node/10.221.250.251 container=service-ca-controller container exited with code 255 (Error): 
Sep 18 07:22:04.437 E ns/openshift-monitoring pod/grafana-67c5566b99-pmvzb node/10.221.250.251 container=grafana container exited with code 1 (Error): 
Sep 18 07:22:04.437 E ns/openshift-monitoring pod/grafana-67c5566b99-pmvzb node/10.221.250.251 container=grafana-proxy container exited with code 2 (Error): 
Sep 18 07:22:05.361 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.251 container=config-reloader container exited with code 2 (Error): 2020/09/18 03:25:07 Watching directory: "/etc/alertmanager/config"\n
Sep 18 07:22:05.361 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.251 container=alertmanager-proxy container exited with code 2 (Error): 2020/09/18 03:25:07 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:25:07 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:25:07 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:25:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 03:25:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:25:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:25:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 03:25:07.650555       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 03:25:07 http.go:107: HTTPS: listening on [::]:9095\n2020/09/18 05:49:27 server.go:3055: http: TLS handshake error from 172.29.117.38:39842: read tcp 172.29.68.220:9095->172.29.117.38:39842: read: connection reset by peer\n
Sep 18 07:22:05.398 E ns/openshift-monitoring pod/prometheus-adapter-8bdb6c96-rh74p node/10.221.250.251 container=prometheus-adapter container exited with code 2 (Error): I0918 03:25:35.698591       1 adapter.go:93] successfully using in-cluster auth\nI0918 03:25:36.294282       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 18 07:22:05.454 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.251 container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 03:25:48 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/18 03:28:30 config map updated\n2020/09/18 03:28:30 successfully triggered reload\n
Sep 18 07:22:05.454 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.251 container=prometheus-proxy container exited with code 2 (Error): :49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 03:25:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:25:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:25:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 03:25:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:25:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 03:25:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 03:25:49 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 03:25:49 http.go:107: HTTPS: listening on [::]:9091\nI0918 03:25:49.466090       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 03:36:22 oauthproxy.go:774: basicauth: 172.29.68.229:58146 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 04:14:35 server.go:3055: http: TLS handshake error from 172.29.117.38:57916: read tcp 172.29.68.226:9091->172.29.117.38:57916: read: connection reset by peer\n2020/09/18 04:37:48 oauthproxy.go:774: basicauth: 172.29.68.236:49624 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 04:39:02 oauthproxy.go:774: basicauth: 172.29.68.236:51586 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09
Sep 18 07:22:05.454 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.251 container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-18T03:25:48.44440896Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-18T03:25:48.444615237Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-18T03:25:48.446949542Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T03:25:53.585009719Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 18 07:22:23.627 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.246 container=prometheus container exited with code 1 (Error): aller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T07:22:18.938Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T07:22:18.942Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T07:22:18.943Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T07:22:18.944Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T07:22:18.944Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T07:22:18.944Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T07:22:18.946Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T07:22:18.946Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 07:44:55.346 E ns/openshift-console-operator pod/console-operator-69cdbff45c-sf5tb node/10.221.250.246 container=console-operator container exited with code 255 (Error): :29:47Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-09-18T07:21:59Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-09-18T03:17:54Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0918 07:22:00.039765       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"8e075efa-9bb2-4515-929d-645d5e5f64a8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nI0918 07:44:44.188240       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 07:44:44.189440       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0918 07:44:44.189473       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0918 07:44:44.189496       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0918 07:44:44.189514       1 controller.go:70] Shutting down Console\nI0918 07:44:44.189529       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0918 07:44:44.189583       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0918 07:44:44.189589       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0918 07:44:44.189606       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0918 07:44:44.189687       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0918 07:44:44.189698       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0918 07:44:44.189729       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0918 07:44:44.189737       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0918 07:44:44.189830       1 builder.go:243] stopped\n
Sep 18 07:44:55.366 E ns/kube-system pod/ibmcloud-block-storage-plugin-68d5c65db9-w6x4g node/10.221.250.246 container=ibmcloud-block-storage-plugin-container container exited with code 2 (Error): 
Sep 18 07:44:55.442 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.246 container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 07:22:20 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 18 07:44:55.442 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.246 container=prometheus-proxy container exited with code 2 (Error): 2020/09/18 07:22:21 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 07:22:21 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 07:22:21 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 07:22:21 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 07:22:21 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 07:22:21 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 07:22:21 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 07:22:21 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0918 07:22:21.852693       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 07:22:21 http.go:107: HTTPS: listening on [::]:9091\n
Sep 18 07:44:55.442 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.246 container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-18T07:22:19.254689453Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-18T07:22:19.254953613Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-18T07:22:19.258036336Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T07:22:24.385489606Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 18 07:44:55.470 E ns/openshift-monitoring pod/prometheus-adapter-8bdb6c96-kxnxf node/10.221.250.246 container=prometheus-adapter container exited with code 2 (Error): I0918 07:21:51.654633       1 adapter.go:93] successfully using in-cluster auth\nI0918 07:21:52.534367       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 18 07:44:56.393 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-66d5ff6df8-4mwtd node/10.221.250.246 container=kube-storage-version-migrator-operator container exited with code 255 (Error): oMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-09-18T03:17:28Z","reason":"NoData"}],"versions":[{"name":"operator","version":"4.4.0-0.ci-2020-09-12-084837"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0918 03:17:32.791945       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"4eb3d28d-3c92-4ad2-bdc2-d3ce104b1dfc", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0918 03:18:52.959303       1 observer_polling.go:105] Observed file "/var/run/secrets/serving-cert/tls.crt" has been created (hash="81d794c462f541f09acc07f91ce956ddf637d52d157f4c7496aac0b3c65a2153")\nW0918 03:18:52.960617       1 builder.go:101] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created\nI0918 03:18:52.960698       1 observer_polling.go:105] Observed file "/var/run/secrets/serving-cert/tls.key" has been created (hash="922b69a2bd7add8d6a3c3c624c25776f2359d8f25e0f538322aaf45cdc831a6d")\nF0918 03:18:52.960766       1 leaderelection.go:66] leaderelection lost\n
Sep 18 07:44:56.475 E ns/openshift-monitoring pod/alertmanager-main-2 node/10.221.250.246 container=config-reloader container exited with code 2 (Error): 2020/09/18 03:24:28 Watching directory: "/etc/alertmanager/config"\n
Sep 18 07:44:56.475 E ns/openshift-monitoring pod/alertmanager-main-2 node/10.221.250.246 container=alertmanager-proxy container exited with code 2 (Error): 2020/09/18 03:24:29 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:24:29 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:24:29 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:24:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 03:24:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:24:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:24:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 03:24:29 http.go:107: HTTPS: listening on [::]:9095\nI0918 03:24:29.077867       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 04:34:32 server.go:3055: http: TLS handshake error from 172.29.68.218:51214: read tcp 172.29.117.36:9095->172.29.68.218:51214: read: connection reset by peer\n
Sep 18 07:44:56.555 E ns/openshift-marketplace pod/redhat-marketplace-6b468b8dc6-nblh9 node/10.221.250.246 container=redhat-marketplace container exited with code 2 (Error): 
Sep 18 07:44:56.619 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz node/10.221.250.246 container=operator container exited with code 255 (Error): .117.53:43998]\nI0918 07:43:58.456895       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 07:44:08.484951       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 07:44:18.527822       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 07:44:22.894362       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 07:44:22.894405       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 07:44:22.897184       1 httplog.go:90] GET /metrics: (19.298243ms) 200 [Prometheus/2.15.2 172.29.117.42:58050]\nI0918 07:44:27.122469       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0918 07:44:27.122513       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0918 07:44:27.126200       1 httplog.go:90] GET /metrics: (3.912475ms) 200 [Prometheus/2.15.2 172.29.117.53:43998]\nI0918 07:44:28.564536       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 07:44:30.832211       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Secret total 1 items received\nI0918 07:44:38.610397       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0918 07:44:44.233224       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 07:44:44.233725       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0918 07:44:44.233769       1 builder.go:209] server exited\n
Sep 18 07:44:57.514 E ns/openshift-monitoring pod/thanos-querier-74695f558-j25r2 node/10.221.250.246 container=oauth-proxy container exited with code 2 (Error): 2020/09/18 07:21:52 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 07:21:52 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 07:21:52 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 07:21:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 07:21:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 07:21:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 07:21:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 07:21:52 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 07:21:52 http.go:107: HTTPS: listening on [::]:9091\nI0918 07:21:52.960227       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Sep 18 07:44:57.534 E ns/openshift-monitoring pod/prometheus-adapter-8bdb6c96-v2kfv node/10.221.250.246 container=prometheus-adapter container exited with code 2 (Error): I0918 03:25:35.222727       1 adapter.go:93] successfully using in-cluster auth\nI0918 03:25:35.810710       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 18 07:44:57.634 E ns/openshift-service-ca pod/service-ca-688cc6755c-pwx4m node/10.221.250.246 container=service-ca-controller container exited with code 255 (Error): 
Sep 18 07:44:57.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=prometheus-proxy container exited with code 2 (Error): 2.29.68.218:40870 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 06:52:53 oauthproxy.go:774: basicauth: 172.29.68.214:55324 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 06:57:23 oauthproxy.go:774: basicauth: 172.29.68.214:36606 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:01:53 oauthproxy.go:774: basicauth: 172.29.68.214:49176 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:06:23 oauthproxy.go:774: basicauth: 172.29.68.214:34498 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:10:54 oauthproxy.go:774: basicauth: 172.29.68.214:40548 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:15:24 oauthproxy.go:774: basicauth: 172.29.68.214:47970 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:19:54 oauthproxy.go:774: basicauth: 172.29.68.214:34630 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:21:55 oauthproxy.go:774: basicauth: 172.29.117.47:36920 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:26:26 oauthproxy.go:774: basicauth: 172.29.117.47:33530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:30:56 oauthproxy.go:774: basicauth: 172.29.117.47:47616 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/18 07:35:26 oauthproxy.go:774: basicauth: 172.29.117.47:43948 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09
Sep 18 07:44:57.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 03:26:01 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/09/18 03:28:48 config map updated\n2020/09/18 03:28:48 successfully triggered reload\n
Sep 18 07:44:57.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-18T03:26:01.175503187Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-18T03:26:01.175702195Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-18T03:26:01.17836769Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T03:26:06.311074719Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 18 07:44:58.366 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-59b8vxc9v node/10.221.250.246 container=operator container exited with code 255 (Error):  [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:42:31.368037       1 httplog.go:90] GET /metrics: (19.689137ms) 200 [Prometheus/2.15.2 172.29.117.42:36476]\nI0918 07:42:38.928535       1 httplog.go:90] GET /metrics: (2.760064ms) 200 [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:43:01.364633       1 httplog.go:90] GET /metrics: (16.286992ms) 200 [Prometheus/2.15.2 172.29.117.42:36476]\nI0918 07:43:04.749849       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Namespace total 153 items received\nI0918 07:43:08.928344       1 httplog.go:90] GET /metrics: (2.628516ms) 200 [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:43:31.363901       1 httplog.go:90] GET /metrics: (15.628184ms) 200 [Prometheus/2.15.2 172.29.117.42:36476]\nI0918 07:43:38.928289       1 httplog.go:90] GET /metrics: (2.600295ms) 200 [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:43:47.508946       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 1 items received\nI0918 07:44:01.360922       1 httplog.go:90] GET /metrics: (12.637943ms) 200 [Prometheus/2.15.2 172.29.117.42:36476]\nI0918 07:44:08.928437       1 httplog.go:90] GET /metrics: (2.685632ms) 200 [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:44:31.362980       1 httplog.go:90] GET /metrics: (14.79373ms) 200 [Prometheus/2.15.2 172.29.117.42:36476]\nI0918 07:44:38.928331       1 httplog.go:90] GET /metrics: (2.566152ms) 200 [Prometheus/2.15.2 172.29.117.53:44656]\nI0918 07:44:44.683936       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0918 07:44:44.684102       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0918 07:44:44.684535       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0918 07:44:44.684689       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0918 07:44:44.684716       1 builder.go:243] stopped\nF0918 07:44:44.685920       1 builder.go:210] server exited\n
Sep 18 07:44:58.394 E ns/kube-system pod/ibm-file-plugin-75bc676fb4-qklmg node/10.221.250.246 container=ibm-file-plugin-container container exited with code 2 (Error): 
Sep 18 07:44:58.547 E ns/openshift-monitoring pod/kube-state-metrics-7c4d66c6d-vddm6 node/10.221.250.246 container=kube-state-metrics container exited with code 2 (Error): 
Sep 18 07:44:58.581 E ns/openshift-marketplace pod/certified-operators-6458fdf589-72bpv node/10.221.250.246 container=certified-operators container exited with code 2 (Error): 
Sep 18 07:44:58.612 E ns/kube-system pod/ibm-storage-watcher-767f89b9b4-7j9sz node/10.221.250.246 container=ibm-storage-watcher-container container exited with code 2 (Error): 
Sep 18 07:44:58.697 E ns/openshift-service-ca-operator pod/service-ca-operator-768d95956-kx6pc node/10.221.250.246 container=operator container exited with code 255 (Error): 
Sep 18 07:44:59.568 E ns/openshift-monitoring pod/alertmanager-main-1 node/10.221.250.246 container=config-reloader container exited with code 2 (Error): 2020/09/18 03:24:41 Watching directory: "/etc/alertmanager/config"\n
Sep 18 07:44:59.568 E ns/openshift-monitoring pod/alertmanager-main-1 node/10.221.250.246 container=alertmanager-proxy container exited with code 2 (Error): 2020/09/18 03:24:41 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:24:41 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:24:41 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:24:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 03:24:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:24:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 03:24:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 03:24:41.536034       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 03:24:41 http.go:107: HTTPS: listening on [::]:9095\n2020/09/18 05:11:57 server.go:3055: http: TLS handshake error from 172.29.68.218:43816: read tcp 172.29.117.37:9095->172.29.68.218:43816: read: connection reset by peer\n
Sep 18 07:44:59.632 E ns/openshift-monitoring pod/openshift-state-metrics-8fd78f746-wf6m4 node/10.221.250.246 container=openshift-state-metrics container exited with code 2 (Error): 
Sep 18 07:44:59.701 E ns/openshift-marketplace pod/community-operators-84d96d845f-wj7fc node/10.221.250.246 container=community-operators container exited with code 2 (Error): 
Sep 18 07:44:59.731 E ns/openshift-marketplace pod/redhat-operators-768d7c64bc-m6gfn node/10.221.250.246 container=redhat-operators container exited with code 2 (Error): 
Sep 18 07:44:59.762 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.246 container=config-reloader container exited with code 2 (Error): 2020/09/18 07:22:28 Watching directory: "/etc/alertmanager/config"\n
Sep 18 07:44:59.762 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.246 container=alertmanager-proxy container exited with code 2 (Error): 2020/09/18 07:22:29 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 07:22:29 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 07:22:29 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 07:22:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 07:22:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 07:22:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 07:22:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 07:22:29.316144       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 07:22:29 http.go:107: HTTPS: listening on [::]:9095\n
Sep 18 07:44:59.844 E ns/openshift-monitoring pod/telemeter-client-b7d6dc4b7-l8ljf node/10.221.250.246 container=telemeter-client container exited with code 2 (Error): 
Sep 18 07:44:59.844 E ns/openshift-monitoring pod/telemeter-client-b7d6dc4b7-l8ljf node/10.221.250.246 container=reload container exited with code 2 (Error): 
Sep 18 07:45:00.042 E ns/openshift-monitoring pod/thanos-querier-74695f558-glkr4 node/10.221.250.246 container=oauth-proxy container exited with code 2 (Error): 2020/09/18 03:25:45 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 03:25:45 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 03:25:45 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 03:25:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 03:25:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 03:25:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/18 03:25:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 03:25:45 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 03:25:45 http.go:107: HTTPS: listening on [::]:9091\nI0918 03:25:45.447757       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 05:32:41 server.go:3055: http: TLS handshake error from 172.29.117.38:50058: read tcp 172.29.117.41:9091->172.29.117.38:50058: read: connection reset by peer\n2020/09/18 06:44:33 server.go:3055: http: TLS handshake error from 172.29.68.218:55240: read tcp 172.29.117.41:9091->172.29.68.218:55240: read: connection reset by peer\n
Sep 18 07:45:00.435 E ns/openshift-monitoring pod/grafana-67c5566b99-zp4k9 node/10.221.250.246 container=grafana container exited with code 1 (Error): 
Sep 18 07:45:00.435 E ns/openshift-monitoring pod/grafana-67c5566b99-zp4k9 node/10.221.250.246 container=grafana-proxy container exited with code 2 (Error): 
Sep 18 07:45:11.573 E ns/openshift-console pod/console-f948949cb-88t6g node/10.221.250.246 container=console container exited with code 2 (Error): 2020-09-18T07:21:51Z cmd/main: cookies are secure!\n2020-09-18T07:21:51Z cmd/main: Binding to [::]:8443...\n2020-09-18T07:21:51Z cmd/main: using TLS\n
Sep 18 07:45:20.801 E ns/openshift-monitoring pod/prometheus-k8s-1 node/10.221.250.251 container=prometheus container exited with code 1 (Error): aller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T07:45:13.631Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T07:45:13.636Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T07:45:13.637Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T07:45:13.640Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC\nlevel=info ts=2020-09-18T07:45:13.640Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T07:45:13.640Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T07:45:13.640Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T07:45:13.641Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T07:45:13.641Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T07:45:13.641Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T07:45:13.641Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T07:45:13.641Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T07:45:13.641Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T07:45:13.641Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T07:45:13.642Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T07:45:13.642Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 07:45:45.746 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=prometheus container exited with code 1 (Error): aller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T07:45:43.432Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T07:45:43.437Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T07:45:43.437Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T07:45:43.439Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T07:45:43.439Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T07:45:43.439Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T07:45:43.440Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T07:45:43.440Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 07:48:47.441 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.246 container=config-reloader container exited with code 2 (Error): 2020/09/18 07:45:42 Watching directory: "/etc/alertmanager/config"\n
Sep 18 07:48:47.441 E ns/openshift-monitoring pod/alertmanager-main-0 node/10.221.250.246 container=alertmanager-proxy container exited with code 2 (Error): 2020/09/18 07:45:43 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 07:45:43 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 07:45:43 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 07:45:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/18 07:45:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 07:45:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/18 07:45:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0918 07:45:43.432857       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 07:45:43 http.go:107: HTTPS: listening on [::]:9095\n
Sep 18 07:48:47.495 E ns/openshift-marketplace pod/redhat-operators-85d4fc767f-mml8n node/10.221.250.246 container=redhat-operators container exited with code 2 (Error): 
Sep 18 07:48:47.551 E ns/openshift-marketplace pod/certified-operators-584977d5cf-ffshf node/10.221.250.246 container=certified-operators container exited with code 2 (Error): 
Sep 18 07:48:47.641 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/18 07:45:43 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 18 07:48:47.641 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=prometheus-proxy container exited with code 2 (Error): 2020/09/18 07:45:44 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 07:45:44 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/18 07:45:44 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/18 07:45:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/18 07:45:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/18 07:45:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/18 07:45:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/18 07:45:44 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/18 07:45:44 http.go:107: HTTPS: listening on [::]:9091\nI0918 07:45:44.633111       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/09/18 07:46:15 oauthproxy.go:774: basicauth: 172.29.68.205:41000 Authorization header does not start with 'Basic', skipping basic authentication\n
Sep 18 07:48:47.641 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.246 container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-18T07:45:43.669842489Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-18T07:45:43.670088094Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-18T07:45:43.672826352Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-18T07:45:48.818724199Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 18 07:48:47.681 E ns/openshift-marketplace pod/redhat-marketplace-55bfd87db5-9bn65 node/10.221.250.246 container=redhat-marketplace container exited with code 2 (Error): 
Sep 18 07:49:06.648 E ns/openshift-monitoring pod/prometheus-k8s-0 node/10.221.250.251 container=prometheus container exited with code 1 (Error): aller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-09-18T07:49:02.352Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-18T07:49:02.368Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-18T07:49:02.369Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:663 fs_type=EXT4_SUPER_MAGIC\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-09-18T07:49:02.372Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-18T07:49:02.372Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-18T07:49:02.371Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-18T07:49:02.375Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-18T07:49:02.375Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-09-18
Sep 18 08:08:43.702 E ns/openshift-multus pod/multus-admission-controller-njkn2 node/10.221.250.246 container=multus-admission-controller container exited with code 137 (Error): 

				
Click to see stdout/stderr from junit_e2e_20200918-081232.xml



openshift-tests [Area:Networking][endpoints] admission [Top Level] [Area:Networking][endpoints] admission TestEndpointAdmission [Suite:openshift/conformance/parallel] 2m26s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Area\:Networking\]\[endpoints\]\sadmission\s\[Top\sLevel\]\s\[Area\:Networking\]\[endpoints\]\sadmission\sTestEndpointAdmission\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/onsi/ginkgo/internal/leafnodes/runner.go:113]: unexpected success modifying endpoint
				



openshift-tests [Feature:DeploymentConfig] deploymentconfigs when run iteratively [Conformance] [Top Level] [Feature:DeploymentConfig] deploymentconfigs when run iteratively [Conformance] should only deploy the last deployment [Suite:openshift/conformance/parallel/minimal] 15m1s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:DeploymentConfig\]\sdeploymentconfigs\swhen\srun\siteratively\s\[Conformance\]\s\[Top\sLevel\]\s\[Feature\:DeploymentConfig\]\sdeploymentconfigs\swhen\srun\siteratively\s\[Conformance\]\sshould\sonly\sdeploy\sthe\slast\sdeployment\s\[Suite\:openshift\/conformance\/parallel\/minimal\]$'
I0918 05:59:27.304265   32103 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep 18 05:59:55.603: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep 18 05:59:58.104: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 18 06:00:06.105: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Sep 18 06:00:06.105: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready.
Sep 18 06:00:06.105: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 18 06:00:06.704: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed)
Sep 18 06:00:06.704: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'ibmcloud-block-storage-driver' (0 seconds elapsed)
Sep 18 06:00:06.704: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'local-provisioner-configurator' (0 seconds elapsed)
Sep 18 06:00:06.704: INFO: e2e test version: v1.17.1
Sep 18 06:00:06.900: INFO: kube-apiserver version: v1.17.1
Sep 18 06:00:07.201: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Creating a kubernetes client
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/client.go:121
Sep 18 06:00:15.903: INFO: configPath is now "/tmp/configfile486344049"
Sep 18 06:00:15.903: INFO: The user is now "e2e-test-cli-deployment-h4rpb-user"
Sep 18 06:00:15.903: INFO: Creating project "e2e-test-cli-deployment-h4rpb"
Sep 18 06:00:16.900: INFO: Waiting on permissions in project "e2e-test-cli-deployment-h4rpb" ...
Sep 18 06:00:16.910: INFO: Waiting for ServiceAccount "default" to be provisioned...
Sep 18 06:00:17.800: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Sep 18 06:00:18.300: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Sep 18 06:00:18.700: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Sep 18 06:00:19.400: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Sep 18 06:00:19.800: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Sep 18 06:00:35.600: INFO: Project "e2e-test-cli-deployment-h4rpb" has been fully provisioned.
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:51
[It] [Top Level] [Feature:DeploymentConfig] deploymentconfigs when run iteratively [Conformance] should only deploy the last deployment [Suite:openshift/conformance/parallel/minimal]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:114
Sep 18 06:00:42.403: INFO: 00: cancelling deployment
Sep 18 06:00:42.403: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:01:56.600: INFO: 01: cancelling deployment
Sep 18 06:01:56.600: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:02:59.800: INFO: 02: cancelling deployment
Sep 18 06:02:59.800: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:04:36.499: INFO: 03: waiting for current deployment to start running
Sep 18 06:04:37.800: INFO: deployment e2e-test-cli-deployment-h4rpb/deployment-simple-1 reached state "Complete"
Sep 18 06:04:37.800: INFO: 04: waiting for current deployment to start running
Sep 18 06:04:38.201: INFO: deployment e2e-test-cli-deployment-h4rpb/deployment-simple-1 reached state "Complete"
Sep 18 06:04:38.201: INFO: 05: waiting for current deployment to start running
Sep 18 06:04:39.000: INFO: deployment e2e-test-cli-deployment-h4rpb/deployment-simple-1 reached state "Complete"
Sep 18 06:04:39.000: INFO: 06: cancelling deployment
Sep 18 06:04:39.001: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:05:43.400: INFO: 07: triggering a new deployment with config change
Sep 18 06:05:43.400: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 set env dc/deployment-simple A=7'
Sep 18 06:06:58.500: INFO: 08: triggering a new deployment with config change
Sep 18 06:06:58.500: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 set env dc/deployment-simple A=8'
Sep 18 06:07:52.700: INFO: 09: triggering a new deployment with config change
Sep 18 06:07:52.700: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 set env dc/deployment-simple A=9'
Sep 18 06:08:59.200: INFO: 10: cancelling deployment
Sep 18 06:08:59.200: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:10:12.900: INFO: 11: cancelling deployment
Sep 18 06:10:12.900: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:11:12.000: INFO: 12: cancelling deployment
Sep 18 06:11:12.000: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'
Sep 18 06:12:05.000: INFO: 13: cancelling deployment
Sep 18 06:12:05.200: INFO: Running 'oc --namespace=e2e-test-cli-deployment-h4rpb --kubeconfig=/tmp/configfile486344049 rollout cancel dc/deployment-simple'

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Sep 18 06:12:44.700: INFO: Running AfterSuite actions on all nodes
Sep 18 06:12:44.700: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-cli-deployment-h4rpb" for this suite.
Sep 18 06:12:44.901: INFO: Running AfterSuite actions on node 1
				



openshift-tests [Feature:Platform] Managed cluster [Top Level] [Feature:Platform] Managed cluster should ensure pods use downstream images from our release image with proper ImagePullPolicy [Suite:openshift/conformance/parallel] 15m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Platform\]\sManaged\scluster\s\[Top\sLevel\]\s\[Feature\:Platform\]\sManaged\scluster\sshould\sensure\spods\suse\sdownstream\simages\sfrom\sour\srelease\simage\swith\sproper\sImagePullPolicy\s\[Suite\:openshift\/conformance\/parallel\]$'
command terminated with exit code 1
 [] <nil> 0xc001bb66c0 exit status 1 <nil> <nil> true [0xc001d7fee0 0xc001d7ff08 0xc001d7ff08] [0xc001d7fee0 0xc001d7ff08] [0xc001d7fee8 0xc001d7ff00] [0xa15880 0xa159b0] 0xc000ee1740 <nil>}:
time="2020-09-17T23:08:53-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
Sep 18 04:08:54.400: INFO: unable to run command:[exec coredns-fb9bd9db6-8zbcm -c coredns -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:08:54.400: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-file-plugin-75bc676fb4-qklmg -c ibm-file-plugin-container -- cat /etc/redhat-release'
Sep 18 04:09:53.403: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-file-plugin-75bc676fb4-qklmg -c ibm-file-plugin-container -- cat /etc/redhat-release] []   cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001ae23f0 exit status 1 <nil> <nil> true [0xc000010540 0xc0013ae008 0xc0013ae008] [0xc000010540 0xc0013ae008] [0xc000010550 0xc000340cc0] [0xa15880 0xa159b0] 0xc001028e40 <nil>}:
cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
Sep 18 04:09:53.403: INFO: unable to run command:[exec ibm-file-plugin-75bc676fb4-qklmg -c ibm-file-plugin-container -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:09:53.403: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-keepalived-watcher-gvhr4 -c keepalived-watcher -- cat /etc/redhat-release'
Sep 18 04:11:07.200: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-keepalived-watcher-gvhr4 -c keepalived-watcher -- cat /etc/redhat-release] []   cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001e7e3f0 exit status 1 <nil> <nil> true [0xc0001dc540 0xc0001dca88 0xc0001dca88] [0xc0001dc540 0xc0001dca88] [0xc0001dca68 0xc0001dca80] [0xa15880 0xa159b0] 0xc00124af60 <nil>}:
cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
Sep 18 04:11:07.200: INFO: unable to run command:[exec ibm-keepalived-watcher-gvhr4 -c keepalived-watcher -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:11:07.200: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-keepalived-watcher-w2kvs -c keepalived-watcher -- cat /etc/redhat-release'
Sep 18 04:12:15.999: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-keepalived-watcher-w2kvs -c keepalived-watcher -- cat /etc/redhat-release] []   cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001ae2390 exit status 1 <nil> <nil> true [0xc0000104c0 0xc0013ae008 0xc0013ae008] [0xc0000104c0 0xc0013ae008] [0xc000010540 0xc000010720] [0xa15880 0xa159b0] 0xc001028e40 <nil>}:
cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
Sep 18 04:12:16.000: INFO: unable to run command:[exec ibm-keepalived-watcher-w2kvs -c keepalived-watcher -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:12:16.000: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.246 -c ibm-master-proxy-static -- cat /etc/redhat-release'
Sep 18 04:13:28.000: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.246 -c ibm-master-proxy-static -- cat /etc/redhat-release] []   cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
 cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001e7e480 exit status 1 <nil> <nil> true [0xc0001dca70 0xc001d7e018 0xc001d7e018] [0xc0001dca70 0xc001d7e018] [0xc0001dca78 0xc001d7e000] [0xa15880 0xa159b0] 0xc00124b140 <nil>}:
cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
Sep 18 04:13:28.100: INFO: unable to run command:[exec ibm-master-proxy-static-10.221.250.246 -c ibm-master-proxy-static -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:13:28.100: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.246 -c pause -- cat /etc/redhat-release'
Sep 18 04:14:02.301: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.246 -c pause -- cat /etc/redhat-release] []   time="2020-09-17T23:14:01-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
 time="2020-09-17T23:14:01-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
 [] <nil> 0xc001e7e4b0 exit status 1 <nil> <nil> true [0xc0001dca68 0xc0000104c0 0xc0000104c0] [0xc0001dca68 0xc0000104c0] [0xc0001dca80 0xc0001dd080] [0xa15880 0xa159b0] 0xc00124ade0 <nil>}:
time="2020-09-17T23:14:01-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
Sep 18 04:14:02.302: INFO: unable to run command:[exec ibm-master-proxy-static-10.221.250.246 -c pause -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:14:02.302: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.251 -c ibm-master-proxy-static -- cat /etc/redhat-release'
Sep 18 04:14:46.900: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.251 -c ibm-master-proxy-static -- cat /etc/redhat-release] []   cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
 cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001ae23f0 exit status 1 <nil> <nil> true [0xc000340cc0 0xc001d7e0a0 0xc001d7e0a0] [0xc000340cc0 0xc001d7e0a0] [0xc001d7e038 0xc001d7e088] [0xa15880 0xa159b0] 0xc001028fc0 <nil>}:
cat: /etc/redhat-release: No such file or directory
command terminated with exit code 1
Sep 18 04:14:47.200: INFO: unable to run command:[exec ibm-master-proxy-static-10.221.250.251 -c ibm-master-proxy-static -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:14:47.200: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.251 -c pause -- cat /etc/redhat-release'
Sep 18 04:16:11.700: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-master-proxy-static-10.221.250.251 -c pause -- cat /etc/redhat-release] []   time="2020-09-17T23:16:10-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
 time="2020-09-17T23:16:10-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
 [] <nil> 0xc001ae2390 exit status 1 <nil> <nil> true [0xc0001dc540 0xc0001dca88 0xc0001dca88] [0xc0001dc540 0xc0001dca88] [0xc0001dca68 0xc0001dca80] [0xa15880 0xa159b0] 0xc00124af00 <nil>}:
time="2020-09-17T23:16:10-05:00" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"cat\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"cat\": executable file not found in $PATH"
command terminated with exit code 1
Sep 18 04:16:11.700: INFO: unable to run command:[exec ibm-master-proxy-static-10.221.250.251 -c pause -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:16:11.700: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-storage-watcher-767f89b9b4-7j9sz -c ibm-storage-watcher-container -- cat /etc/redhat-release'
Sep 18 04:17:14.102: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibm-storage-watcher-767f89b9b4-7j9sz -c ibm-storage-watcher-container -- cat /etc/redhat-release] []   cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001ae2780 exit status 1 <nil> <nil> true [0xc0001dd078 0xc001d7e068 0xc001d7e068] [0xc0001dd078 0xc001d7e068] [0xc0001dd080 0xc001d7e050] [0xa15880 0xa159b0] 0xc00124b440 <nil>}:
cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
Sep 18 04:17:14.102: INFO: unable to run command:[exec ibm-storage-watcher-767f89b9b4-7j9sz -c ibm-storage-watcher-container -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:17:14.102: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibmcloud-block-storage-driver-d29xd -c ibmcloud-block-storage-driver-container -- cat /etc/redhat-release'
Sep 18 04:18:34.599: INFO: Error running &{/usr/bin/oc [oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibmcloud-block-storage-driver-d29xd -c ibmcloud-block-storage-driver-container -- cat /etc/redhat-release] []   cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
 [] <nil> 0xc001e7e450 exit status 1 <nil> <nil> true [0xc000010540 0xc0001dca68 0xc0001dca68] [0xc000010540 0xc0001dca68] [0xc000010550 0xc0001dc540] [0xa15880 0xa159b0] 0xc00124ade0 <nil>}:
cat: can't open '/etc/redhat-release': No such file or directory
command terminated with exit code 1
Sep 18 04:18:34.700: INFO: unable to run command:[exec ibmcloud-block-storage-driver-d29xd -c ibmcloud-block-storage-driver-container -- cat /etc/redhat-release] with error: exit status 1
Sep 18 04:18:34.700: INFO: Running 'oc --namespace=kube-system --kubeconfig=/data/kubeconfig/admin.kubeconfig exec ibmcloud-block-storage-driver-hch9g -c ibmcloud-block-storage-driver-container -- cat /etc/redhat-release'

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Sep 18 04:19:41.900: INFO: Running AfterSuite actions on all nodes
Sep 18 04:19:41.900: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
Sep 18 04:19:43.600: INFO: Running AfterSuite actions on node 1
				
				Click to see stdout/stderr from junit_e2e_20200918-081232.xml
openshift-tests [Feature:ProjectAPI] TestProjectWatch [Top Level] [Feature:ProjectAPI] TestProjectWatch should succeed [Suite:openshift/conformance/parallel] 3m12s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:ProjectAPI\]\s\sTestProjectWatch\s\[Top\sLevel\]\s\[Feature\:ProjectAPI\]\s\sTestProjectWatch\sshould\ssucceed\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/project/project.go:233]: timeout: e2e-test-project-api-5dk9d

openshift-tests [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should have important platform topology metrics [Suite:openshift/conformance/parallel/minimal] 15m3s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Prometheus\]\[Conformance\]\sPrometheus\swhen\sinstalled\son\sthe\scluster\s\[Top\sLevel\]\s\[Feature\:Prometheus\]\[Conformance\]\sPrometheus\swhen\sinstalled\son\sthe\scluster\sshould\shave\simportant\splatform\stopology\smetrics\s\[Suite\:openshift\/conformance\/parallel\/minimal\]$'
I0918 03:41:04.303587    1031 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Sep 18 03:41:15.602: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Sep 18 03:41:17.104: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 18 03:41:19.003: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Sep 18 03:41:19.103: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready.
Sep 18 03:41:19.103: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 18 03:41:19.301: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'ibm-keepalived-watcher' (0 seconds elapsed)
Sep 18 03:41:19.301: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'ibmcloud-block-storage-driver' (0 seconds elapsed)
Sep 18 03:41:19.301: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'local-provisioner-configurator' (0 seconds elapsed)
Sep 18 03:41:19.301: INFO: e2e test version: v1.17.1
Sep 18 03:41:19.500: INFO: kube-apiserver version: v1.17.1
Sep 18 03:41:19.701: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [Feature:Prometheus][Conformance] Prometheus
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Creating a kubernetes client
[BeforeEach] [Feature:Prometheus][Conformance] Prometheus
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:92
[It] [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should have important platform topology metrics [Suite:openshift/conformance/parallel/minimal]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/prometheus/prometheus.go:233
Sep 18 03:46:09.301: INFO: configPath is now "/tmp/configfile559249367"
Sep 18 03:46:09.301: INFO: The user is now "e2e-test-prometheus-6vfzb-user"
Sep 18 03:46:09.301: INFO: Creating project "e2e-test-prometheus-6vfzb"
Sep 18 03:46:09.901: INFO: Waiting on permissions in project "e2e-test-prometheus-6vfzb" ...
Sep 18 03:46:10.101: INFO: Waiting for ServiceAccount "default" to be provisioned...
Sep 18 03:46:10.501: INFO: Waiting for ServiceAccount "deployer" to be provisioned...
Sep 18 03:46:10.800: INFO: Waiting for ServiceAccount "builder" to be provisioned...
Sep 18 03:46:11.400: INFO: Waiting for RoleBinding "system:image-pullers" to be provisioned...
Sep 18 03:46:12.300: INFO: Waiting for RoleBinding "system:image-builders" to be provisioned...
Sep 18 03:46:12.800: INFO: Waiting for RoleBinding "system:deployers" to be provisioned...
Sep 18 03:46:28.400: INFO: Project "e2e-test-prometheus-6vfzb" has been fully provisioned.
Sep 18 03:46:28.402: INFO: Creating new exec pod
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Sep 18 03:46:47.128: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Sep 18 03:47:46.902: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Sep 18 03:47:46.903: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}"
Sep 18 03:47:46.903: INFO: promQL query: cluster_infrastructure_provider{type!=""} had reported incorrect results:
[]
STEP: perform prometheus metric query cluster_feature_set
Sep 18 03:47:46.903: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Sep 18 03:48:40.900: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Sep 18 03:48:40.900: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}"
Sep 18 03:48:40.900: INFO: promQL query: cluster_feature_set had reported incorrect results:
[]
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Sep 18 03:48:40.901: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'
Sep 18 03:49:34.102: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D'\n"
Sep 18 03:49:34.102: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}"
Sep 18 03:49:34.102: INFO: promQL query: cluster_installer{type!="",invoker!=""} had reported incorrect results:
[]
STEP: perform prometheus metric query instance:etcd_object_counts:sum > 0
Sep 18 03:49:34.102: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0"'
Sep 18 03:50:25.500: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=instance%3Aetcd_object_counts%3Asum+%3E+0'\n"
Sep 18 03:50:25.500: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{\"__name__\":\"instance:etcd_object_counts:sum\",\"instance\":\"172.20.0.1:2040\"},\"value\":[1600401024.343,\"4243\"]}]}}"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_cores:sum{label_kubernetes_io_arch!="",label_node_role_kubernetes_io_master!=""}) > 0
Sep 18 03:50:25.500: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Sep 18 03:51:21.500: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_cores%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Sep 18 03:51:21.500: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{},\"value\":[1600401080.857,\"16\"]}]}}"
STEP: perform prometheus metric query sum(node_role_os_version_machine:cpu_capacity_sockets:sum{label_kubernetes_io_arch!="",label_node_hyperthread_enabled!="",label_node_role_kubernetes_io_master!=""}) > 0
Sep 18 03:51:21.500: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0"'
Sep 18 03:52:20.703: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=sum%28node_role_os_version_machine%3Acpu_capacity_sockets%3Asum%7Blabel_kubernetes_io_arch%21%3D%22%22%2Clabel_node_hyperthread_enabled%21%3D%22%22%2Clabel_node_role_kubernetes_io_master%21%3D%22%22%7D%29+%3E+0'\n"
Sep 18 03:52:20.703: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{},\"value\":[1600401140.1,\"2\"]}]}}"
STEP: perform prometheus metric query cluster_infrastructure_provider{type!=""}
Sep 18 03:52:30.703: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D"'
Sep 18 03:53:30.300: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_infrastructure_provider%7Btype%21%3D%22%22%7D'\n"
Sep 18 03:53:30.300: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}"
STEP: perform prometheus metric query cluster_feature_set
Sep 18 03:53:30.300: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set"'
Sep 18 03:54:44.900: INFO: stderr: "+ curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_feature_set'\n"
Sep 18 03:54:44.900: INFO: stdout: "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[]}}"
STEP: perform prometheus metric query cluster_installer{type!="",invoker!=""}
Sep 18 03:54:44.900: INFO: Running '/usr/bin/kubectl --server=https://gatetestd12-224397879d6b490d1c67ac6f9ba76252-0001.us-south.containers.appdomain.cloud:30690 --kubeconfig=/data/kubeconfig/admin.kubeconfig exec --namespace=e2e-test-prometheus-6vfzb execpodw27kj -- /bin/sh -x -c curl -s -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IlRlR3A2ekVhZnZ6TEdYN3BPNldwZTc5SklObXJmM0VtYXRCSjZUMV9oSHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWFkYXB0ZXItdG9rZW4tNHRidzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicHJvbWV0aGV1cy1hZGFwdGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzI3MjE2NjMtMmZjMi00YzdhLThkNjItMmMwNzE5NWY3NDhhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1tb25pdG9yaW5nOnByb21ldGhldXMtYWRhcHRlciJ9.vP_kH8eOUqOrgt5tCps4cG3BhXrq4BCgnLLemRjEeCv2Ws5SKV_bwhHekNwNI4yj7zscIrccYVnFRJXcSgCkMh-zLccbhqbf03IAEnTvBZc9OTxkpWMEByLuiFByKdRbkxBrbV6ZIlebeWZHD1gFsG9bYVCV91EL2f1WfT36h-Yx47i_czsbHdzAjqskuyOCpKJyFcricugBOkCA7pqsgGRLXhJpsxRSRfmXsXirKqXh9jG9lbfa5uMvAlaLDQt85S_xS0_Fe4WGf0axJfx8VeC-x83MZ1jRs_GJ-7IKwYA95BUArVQOsdlzY1hdOOkSgq2daQlW5ebisYChkQM_fw' "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=cluster_installer%7Btype%21%3D%22%22%2Cinvoker%21%3D%22%22%7D"'

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Sep 18 03:55:18.401: INFO: Running AfterSuite actions on all nodes
Sep 18 03:55:18.401: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-prometheus-6vfzb" for this suite.
Sep 18 03:55:20.700: INFO: Running AfterSuite actions on node 1
				
				Click to see stdout/stderr from junit_e2e_20200918-081232.xml



openshift-tests [Feature:Prometheus][Late] Alerts [Top Level] [Feature:Prometheus][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Suite:openshift/conformance/parallel] 1m59s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Feature\:Prometheus\]\[Late\]\sAlerts\s\[Top\sLevel\]\s\[Feature\:Prometheus\]\[Late\]\sAlerts\sshouldn\'t\sreport\sany\salerts\sin\sfiring\sstate\sapart\sfrom\sWatchdog\sand\sAlertmanagerReceiversNotConfigured\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/util/prometheus/helpers.go:174]: Expected
    <map[string]error | len:1>: {
        "count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|FailingOperator|ImagePruningDisabled\",alertstate=\"firing\"}[2h]) >= 1": {
            s: "promQL query: count_over_time(ALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|KubeAPILatencyHigh|FailingOperator|ImagePruningDisabled\",alertstate=\"firing\"}[2h]) >= 1 had reported incorrect results:\n[{\"metric\":{\"alertname\":\"TargetDown\",\"alertstate\":\"firing\",\"job\":\"openshift-apiserver\",\"namespace\":\"default\",\"service\":\"openshift-apiserver\",\"severity\":\"warning\"},\"value\":[1600416740.302,\"25\"]}]",
        },
    }
to be empty
				


openshift-tests [Suite:openshift/oauth/htpasswd] HTPasswd IDP [Top Level] [Suite:openshift/oauth/htpasswd] HTPasswd IDP should successfully configure htpasswd and be responsive [Suite:openshift/conformance/parallel] 9m54s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Suite\:openshift\/oauth\/htpasswd\]\sHTPasswd\sIDP\s\[Top\sLevel\]\s\[Suite\:openshift\/oauth\/htpasswd\]\sHTPasswd\sIDP\sshould\ssuccessfully\sconfigure\shtpasswd\sand\sbe\sresponsive\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/oauth/htpasswd.go:39]: Unexpected error:
    <http.tlsHandshakeTimeoutError>: {}
    net/http: TLS handshake timeout
occurred
				


openshift-tests [Suite:openshift/oauth] LDAP IDP [Top Level] [Suite:openshift/oauth] LDAP IDP should authenticate against an ldap server [Suite:openshift/conformance/parallel] 11m34s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[Suite\:openshift\/oauth\]\sLDAP\sIDP\s\[Top\sLevel\]\s\[Suite\:openshift\/oauth\]\sLDAP\sIDP\sshould\sauthenticate\sagainst\san\sldap\sserver\s\[Suite\:openshift\/conformance\/parallel\]$'
fail [github.com/openshift/origin/test/extended/oauth/oauth_ldap.go:104]: Expected
    <http.tlsHandshakeTimeoutError>: {}
to match error
    <string>: challenger chose not to retry the request
				


openshift-tests [cli] oc adm must-gather [Top Level] [cli] oc adm must-gather runs successfully [Suite:openshift/conformance/parallel] 15m6s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[cli\]\soc\sadm\smust\-gather\s\[Top\sLevel\]\s\[cli\]\soc\sadm\smust\-gather\sruns\ssuccessfully\s\[Suite\:openshift\/conformance\/parallel\]$'
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/monitoring.coreos.com/prometheusrules/openshift-service-catalog-apiserver-operator.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/monitoring.coreos.com/servicemonitors/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/monitoring.coreos.com/servicemonitors/openshift-service-catalog-apiserver-operator.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/operator/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/operator/operator/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/operator/operator/logs/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/operator/operator/logs/current.log
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/pods/openshift-service-catalog-apiserver-operator-58b8954db8-cfjhz/operator/operator/logs/previous.log
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/route.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-apiserver-operator/route.openshift.io/routes.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps.openshift.io/deploymentconfigs.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps/daemonsets.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps/deployments.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps/replicasets.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/apps/statefulsets.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/autoscaling/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/autoscaling/horizontalpodautoscalers.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/batch/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/batch/cronjobs.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/batch/jobs.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/build.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/build.openshift.io/buildconfigs.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/build.openshift.io/builds.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/configmaps.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/endpoints.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/events.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/persistentvolumeclaims.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/pods.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/replicationcontrollers.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/secrets.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/core/services.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/image.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/image.openshift.io/imagestreams.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/monitoring.coreos.com/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/monitoring.coreos.com/prometheusrules/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/monitoring.coreos.com/prometheusrules/openshift-service-catalog-controller-manager-operator.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/monitoring.coreos.com/servicemonitors/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/monitoring.coreos.com/servicemonitors/openshift-service-catalog-controller-manager-operator.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/openshift-service-catalog-controller-manager-operator-59b8vxc9v.yaml
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/operator/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/operator/operator/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/operator/operator/logs/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/operator/operator/logs/current.log
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/pods/openshift-service-catalog-controller-manager-operator-59b8vxc9v/operator/operator/logs/previous.log
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/route.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift-service-catalog-controller-manager-operator/route.openshift.io/routes.yaml
[must-gather-ldhft] OUT namespaces/openshift/
[must-gather-ldhft] OUT namespaces/openshift/openshift.yaml
[must-gather-ldhft] OUT namespaces/openshift/apps.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift/apps.openshift.io/deploymentconfigs.yaml
[must-gather-ldhft] OUT namespaces/openshift/apps/
[must-gather-ldhft] OUT namespaces/openshift/apps/daemonsets.yaml
[must-gather-ldhft] OUT namespaces/openshift/apps/deployments.yaml
[must-gather-ldhft] OUT namespaces/openshift/apps/replicasets.yaml
[must-gather-ldhft] OUT namespaces/openshift/apps/statefulsets.yaml
[must-gather-ldhft] OUT namespaces/openshift/autoscaling/
[must-gather-ldhft] OUT namespaces/openshift/autoscaling/horizontalpodautoscalers.yaml
[must-gather-ldhft] OUT namespaces/openshift/batch/
[must-gather-ldhft] OUT namespaces/openshift/batch/cronjobs.yaml
[must-gather-ldhft] OUT namespaces/openshift/batch/jobs.yaml
[must-gather-ldhft] OUT namespaces/openshift/build.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift/build.openshift.io/buildconfigs.yaml
[must-gather-ldhft] OUT namespaces/openshift/build.openshift.io/builds.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/
[must-gather-ldhft] OUT namespaces/openshift/core/configmaps.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/endpoints.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/events.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/persistentvolumeclaims.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/pods.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/replicationcontrollers.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/secrets.yaml
[must-gather-ldhft] OUT namespaces/openshift/core/services.yaml
[must-gather-ldhft] OUT namespaces/openshift/image.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift/image.openshift.io/imagestreams.yaml
[must-gather-ldhft] OUT namespaces/openshift/route.openshift.io/
[must-gather-ldhft] OUT namespaces/openshift/route.openshift.io/routes.yaml
[must-gather-ldhft] OUT 
[must-gather-ldhft] OUT sent 34,479 bytes  received 88,085,448 bytes  380,647.63 bytes/sec
[must-gather-ldhft] OUT total size is 87,903,451  speedup is 1.00
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xkdxh deleted
[must-gather      ] OUT namespace/openshift-must-gather-ns2sm deleted
STEP: /tmp/test.oc-adm-must-gather.063407855/registry-ng-bluemix-net-armada-master-ocp-release-sha256-a273f5ac7f1ad8f7ffab45205ac36c8dff92d9107ef3ae429eeb135fa8057b8b/audit_logs/kube-apiserver
STEP: /tmp/test.oc-adm-must-gather.063407855/registry-ng-bluemix-net-armada-master-ocp-release-sha256-a273f5ac7f1ad8f7ffab45205ac36c8dff92d9107ef3ae429eeb135fa8057b8b/audit_logs/openshift-apiserver

---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
Sep 18 05:30:18.201: INFO: Running AfterSuite actions on all nodes
Sep 18 05:30:18.201: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-oc-adm-must-gather-2fw48" for this suite.
Sep 18 05:30:22.300: INFO: Running AfterSuite actions on node 1