Result: FAILURE
Tests: 7 failed / 46 succeeded
Started: 2019-11-01 17:57
Elapsed: 2h20m
Work namespace: ci-op-s8qkptjz
Pod: 4.3.0-0.nightly-2019-11-01-175335-metal-serial

Test Failures


openshift-tests Monitor cluster while tests execute (1h29m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
144 error level events were detected during this test run:

Nov 01 18:41:38.719 E ns/openshift-controller-manager pod/controller-manager-n2j6t node/master-2.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=controller-manager container exited with code 137 (Error): 
Nov 01 18:45:38.562 E ns/openshift-ingress pod/router-default-76b4cbd954-bm9w9 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:42:23.888Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:42:28.884Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:43:14.537Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:43:19.535Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:43:39.570Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:43:44.568Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:43:54.961Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:44:20.898Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:44:25.897Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:44:37.087Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:44:42.086Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 18:45:38.667 E ns/openshift-machine-config-operator pod/machine-config-daemon-blmn6 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=oauth-proxy container exited with code 143 (Error): 
Nov 01 18:45:38.680 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 18:27:59 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2019/11/01 18:39:23 config map updated\n2019/11/01 18:39:23 successfully triggered reload\n
Nov 01 18:45:38.680 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 18:28:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:28:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:28:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:28:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 18:28:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:28:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:28:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:28:00 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 18:28:00 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 18:45:38.680 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T18:27:59.644742849Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T18:27:59.644826719Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T18:27:59.645797483Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T18:28:04.711657805Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-01T18:28:04.772597386Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2019-11-01T18:31:04.840245485Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 18:45:38.734 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-725t9 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 18:45:38.787 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-xthvz node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:22:59.593235       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:23:00.128932       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 18:45:39.682 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-49q7z node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:22:59.611734       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:22:59.845866       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 18:45:39.720 E ns/openshift-marketplace pod/community-operators-d946d6555-6746l node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 18:45:39.727 E ns/openshift-marketplace pod/certified-operators-886576ff4-dz2hg node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 18:45:39.735 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:27:46 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:45:39.735 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:27:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:27:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:27:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:27:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:27:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:27:46 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:45:39.743 E ns/openshift-monitoring pod/grafana-69b9b6556-7jwc2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 18:45:39.766 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:27:45 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:45:39.766 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:27:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:27:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:27:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:27:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:27:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:27:46 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:45:39.778 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-7mj7j node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 18:45:39.799 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:27:45 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:45:39.799 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:27:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:27:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:27:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:27:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:27:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:27:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:27:46 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:45:39.809 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-q6vbs node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=reload container exited with code 2 (Error): 
Nov 01 18:45:39.809 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-q6vbs node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=telemeter-client container exited with code 2 (Error): 
Nov 01 18:46:15.789 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:46:14.040Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:46:14.043Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:46:14.045Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:46:14.046Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:46:14.046Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:46:14.046Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T18:46:14.047Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:46:14.047Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 18:46:15.798 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:46:13.960Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:46:13.961Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:46:13.962Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:46:13.962Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T18:46:13.963Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:46:13.963Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:46:13.963Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:46:13.963Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 18:54:44.009 E ns/openshift-marketplace pod/community-operators-d946d6555-d5sm4 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 18:54:44.018 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-pbjk9 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 18:54:44.024 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-knds8 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 18:54:44.042 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:46:14 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:54:44.042 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:46:14 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:46:14 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:46:14 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:46:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:46:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:46:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:46:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:46:14 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:54:44.054 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-j8frr node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:46:24.606018       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:46:25.118719       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 18:54:44.063 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:46:14 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:54:44.063 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:46:14 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:46:14 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:46:14 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:46:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:46:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:46:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:46:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:46:14 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:54:44.072 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-8t4j7 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=reload container exited with code 2 (Error): 
Nov 01 18:54:44.072 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-8t4j7 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=telemeter-client container exited with code 2 (Error): 
Nov 01 18:54:44.089 E ns/openshift-ingress pod/router-default-7dbf667fc8-t8vw2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:46:32.043Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:46:37.042Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:46:42.039Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:46:47.079Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:46:52.075Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:50:11.663Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:50:16.664Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:50:21.665Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:52:48.039Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:52:53.137Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:54:42.317Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 18:54:44.101 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-tjrw2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 18:54:44.128 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 18:46:14 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 18:54:44.128 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 18:46:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:46:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:46:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:46:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 18:46:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:46:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:46:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:46:15 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 18:46:15 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 18:46:24 oauthproxy.go:774: basicauth: 10.129.0.84:40498 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 18:50:54 oauthproxy.go:774: basicauth: 10.129.0.84:45152 Authorization header does not start with 'Basic', skipping basic authentication\n
Nov 01 18:54:44.128 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T18:46:14.345428262Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T18:46:14.345504427Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T18:46:14.443659252Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T18:46:19.414242575Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 18:54:44.140 E ns/openshift-machine-config-operator pod/machine-config-daemon-blpnn node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=oauth-proxy container exited with code 143 (Error): 
Nov 01 18:54:44.149 E ns/openshift-monitoring pod/grafana-69b9b6556-t45sx node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 18:55:02.074 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:54:59.957Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:54:59.960Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:54:59.960Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:54:59.961Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:54:59.961Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:54:59.961Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:54:59.961Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:54:59.961Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:54:59.962Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:54:59.962Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:54:59.962Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:54:59.962Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:54:59.962Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T18:54:59.962Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:54:59.962Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:54:59.962Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 18:55:02.097 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:55:00.220Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:55:00.222Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:55:00.222Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:55:00.223Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:55:00.223Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:55:00.223Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:55:00.224Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T18:55:00.224Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:55:00.224Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 18:56:11.761 E ns/openshift-ingress pod/router-default-7dbf667fc8-9clf4 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): listening on HTTP and HTTPS	{"address": "0.0.0.0:1936"}\n2019-11-01T18:54:55.678Z	INFO	router.template	template/router.go:294	watching for changes	{"path": "/etc/pki/tls/private"}\nE1101 18:54:55.679225       1 haproxy.go:395] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\n2019-11-01T18:54:55.696Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:54:55.696Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-01T18:54:55.918Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:01.105Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:06.082Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:11.083Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:16.082Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:23.172Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:55:28.523Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:56:09.897Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 18:56:11.772 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-pt4tl node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:55:02.919222       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:55:03.615354       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 18:56:11.792 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-t4vb4 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 18:56:11.820 E ns/openshift-marketplace pod/community-operators-d946d6555-9ww9s node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 18:56:11.824 E ns/openshift-marketplace pod/certified-operators-886576ff4-9pd4q node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 18:56:11.832 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:55:00 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:56:11.832 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:55:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:55:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:55:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:55:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:55:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:55:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:55:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:55:01 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:56:11.847 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-qlq2s node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 18:56:11.862 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 18:55:00 Watching directory: "/etc/alertmanager/config"\n
Nov 01 18:56:11.862 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 18:55:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:55:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:55:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:55:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 18:55:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:55:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 18:55:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:55:00 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 18:56:11.875 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 18:55:00 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 18:56:11.875 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 18:55:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:55:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 18:55:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 18:55:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 18:55:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:55:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:55:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:55:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 18:55:01 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 18:55:05 oauthproxy.go:774: basicauth: 10.129.0.110:60154 Authorization header does not start with 'Basic', skipping basic authentication\n
Nov 01 18:56:11.875 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T18:55:00.435988819Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T18:55:00.43607383Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T18:55:00.437370648Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T18:55:05.521071993Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 18:56:11.927 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-qmz2v node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:55:03.358540       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:55:03.890162       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.946 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 18:56:11.960 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-dw7mm node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=reload container exited with code 2 (Error): 
Nov 01 18:56:11.960 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-dw7mm node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=telemeter-client container exited with code 2 (Error): 
Nov 01 18:56:11.986 E ns/openshift-monitoring pod/grafana-69b9b6556-hsdgl node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 18:56:11.989 E ns/openshift-machine-config-operator pod/machine-config-daemon-58n72 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=oauth-proxy container exited with code 143 (Error): 
Nov 01 18:57:26.000 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:57:24.056Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:57:24.059Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:57:24.060Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:57:24.060Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:57:24.061Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:57:24.061Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:57:24.061Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:57:24.061Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:57:24.061Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=info ts=2019-11-01T18:57:24.061Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=error ts=2019-11-01
Nov 01 18:57:26.015 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T18:57:23.966Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T18:57:23.976Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T18:57:23.976Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T18:57:23.978Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T18:57:23.978Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T18:57:23.978Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T18:57:23.979Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T18:57:23.979Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T18:57:23.979Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T18:57:23.979Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T18:57:23.979Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T18:57:23.979Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T18:57:23.979Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T18:57:23.980Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T18:57:23.985Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T18:57:23.985Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:07:33.083 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Nov 01 19:14:16.911 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator ingress is reporting a failure: Some ingresscontrollers are degraded: default
Nov 01 19:23:42.865 E ns/openshift-marketplace pod/certified-operators-886576ff4-5rrnk node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 19:42:40.849 E ns/openshift-marketplace pod/certified-operators-54459f97bc-rtxtd node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 19:42:40.925 E ns/openshift-ingress pod/router-default-7dbf667fc8-8twgl node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T18:57:52.074Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:01:21.488Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:01:26.483Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:23:18.205Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:23:23.201Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:23:42.259Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:23:47.257Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:35:57.628Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:36:02.631Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:36:07.629Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:42:39.917Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 19:42:40.934 E ns/openshift-machine-config-operator pod/machine-config-daemon-npfrw node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=oauth-proxy container exited with code 143 (Error): 
Nov 01 19:42:40.944 E ns/openshift-marketplace pod/community-operators-d946d6555-qz8m5 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 19:42:41.020 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error):  path "/" => upstream "http://localhost:9090/"\n2019/11/01 18:57:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 18:57:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 18:57:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 18:57:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 18:57:25 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 18:58:25 oauthproxy.go:774: basicauth: 10.129.0.131:57120 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:02:55 oauthproxy.go:774: basicauth: 10.129.0.131:32954 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:07:25 oauthproxy.go:774: basicauth: 10.129.0.131:36840 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:11:55 oauthproxy.go:774: basicauth: 10.129.0.131:40730 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:16:25 oauthproxy.go:774: basicauth: 10.129.0.131:44644 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:20:55 oauthproxy.go:774: basicauth: 10.129.0.131:48532 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:25:25 oauthproxy.go:774: basicauth: 10.129.0.131:52460 Authorization header does not start with 'Basic', skipping basic authentication\n2019/11/01 19:29:55 oauthproxy.go:774: basicauth: 10.129.0.131:56350 Authorization header does not start with 'Basic', skipping basic authentication\n2019/
Nov 01 19:42:41.020 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 18:57:24 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 19:42:41.020 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T18:57:24.343299003Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T18:57:24.34338241Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T18:57:24.344148514Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T18:57:29.406093504Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 19:42:41.026 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-ph95s node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:57:23.516422       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:57:23.950397       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:42:41.049 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-x2bh5 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 19:42:41.057 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-kztp7 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 19:42:42.102 E ns/openshift-monitoring pod/grafana-69b9b6556-dsdmm node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 19:42:42.130 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-cx8xf node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 18:57:23.563080       1 adapter.go:93] successfully using in-cluster auth\nI1101 18:57:24.309346       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:43:56.267 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:43:53.385Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:43:53.387Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:43:53.391Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:43:53.391Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:43:53.391Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:43:53.392Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:43:53.392Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:43:53.392Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:43:53.392Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:43:56.277 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:43:53.437Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:43:53.439Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:43:53.440Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:43:53.440Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:43:53.440Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:43:53.440Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:43:53.441Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:43:53.441Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:44:51.530 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-5rzxj node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 19:44:51.542 E ns/openshift-monitoring pod/grafana-69b9b6556-d6cd6 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 19:44:51.551 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-wp2th node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=telemeter-client container exited with code 2 (Error): 
Nov 01 19:44:51.551 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-wp2th node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=reload container exited with code 2 (Error): 
Nov 01 19:44:51.572 E ns/openshift-marketplace pod/community-operators-d946d6555-nnllq node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 19:44:51.586 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-z6rsx node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 19:43:53.815870       1 adapter.go:93] successfully using in-cluster auth\nI1101 19:43:54.116332       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:44:51.619 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-fgmks node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 19:43:53.886506       1 adapter.go:93] successfully using in-cluster auth\nI1101 19:43:54.275784       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:44:51.652 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 19:43:54 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 19:44:51.652 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 19:43:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:43:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:43:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:43:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 19:43:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:43:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:43:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:43:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 19:43:55 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 19:44:51.652 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T19:43:53.848248142Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T19:43:53.84835897Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T19:43:53.849569679Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T19:43:58.925721157Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 19:44:51.681 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 19:43:54 Watching directory: "/etc/alertmanager/config"\n
Nov 01 19:44:51.681 E ns/openshift-monitoring pod/alertmanager-main-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 19:43:54 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:43:54 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:43:54 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:43:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 19:43:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:43:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:43:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:43:54 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 19:44:51.689 E ns/openshift-marketplace pod/certified-operators-54459f97bc-spst8 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 19:44:51.703 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 19:43:54 Watching directory: "/etc/alertmanager/config"\n
Nov 01 19:44:51.703 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 19:43:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:43:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:43:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:43:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 19:43:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:43:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:43:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:43:55 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 19:44:51.712 E ns/openshift-ingress pod/router-default-7dbf667fc8-lhdbk node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): E1101 19:43:46.235849       1 haproxy.go:395] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\n2019-11-01T19:43:46.256Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:43:46.256Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-01T19:43:46.481Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:43:51.481Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:43:56.491Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:01.483Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:06.483Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:11.486Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:16.485Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:25.070Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:44:49.803Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 19:44:51.721 E ns/openshift-machine-config-operator pod/machine-config-daemon-cvz5m node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=oauth-proxy container exited with code 143 (Error): 
Nov 01 19:44:51.729 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-rctfj node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 19:44:51.735 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-mgt4g node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 19:45:02.603 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:45:00.626Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:45:00.630Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:45:00.631Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:45:00.631Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:45:00.631Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:45:00.631Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:45:00.632Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:45:00.632Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:45:02.615 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:45:00.982Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:45:00.985Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:45:00.985Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:45:00.986Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:45:00.986Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:45:00.986Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:45:00.986Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:46:44.808 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-7ljb7 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 19:46:44.940 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-gllpw node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 19:46:45.965 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 19:45:00 Watching directory: "/etc/alertmanager/config"\n
Nov 01 19:46:45.965 E ns/openshift-monitoring pod/alertmanager-main-2 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 19:45:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:45:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:45:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:45:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 19:45:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:45:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:45:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:45:01 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 19:46:45.984 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-lfn2d node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 19:46:45.989 E ns/openshift-marketplace pod/certified-operators-54459f97bc-r7m8s node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 19:46:46.013 E ns/openshift-monitoring pod/prometheus-adapter-6cc66f59f9-kcv4p node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-adapter container exited with code 2 (Error): I1101 19:44:59.748114       1 adapter.go:93] successfully using in-cluster auth\nI1101 19:45:00.248758       1 secure_serving.go:116] Serving securely on [::]:6443\n
Nov 01 19:46:46.037 E ns/openshift-monitoring pod/grafana-69b9b6556-4s6fn node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 19:46:46.082 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 19:45:01 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 19:46:46.082 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 19:45:02 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:45:02 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:45:02 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:45:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 19:45:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:45:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:45:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:45:02 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 19:45:02 http.go:96: HTTPS: listening on [::]:9091\n2019/11/01 19:45:06 oauthproxy.go:774: basicauth: 10.129.0.206:42792 Authorization header does not start with 'Basic', skipping basic authentication\n
Nov 01 19:46:46.082 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T19:45:01.414313624Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T19:45:01.41438507Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T19:45:01.415173446Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T19:45:06.485043202Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 19:47:03.014 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:47:00.425Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:47:00.427Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:47:00.427Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:47:00.428Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:47:00.428Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:47:00.428Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:47:00.428Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 19:47:03.024 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T19:47:00.699Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T19:47:00.701Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T19:47:00.701Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T19:47:00.701Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T19:47:00.701Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T19:47:00.702Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T19:47:00.702Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T19:47:00.702Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T19:47:00.702Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 20:00:31.081 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Nov 01 20:06:33.305 E ns/openshift-marketplace pod/certified-operators-54459f97bc-wgjwm node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=certified-operators container exited with code 2 (Error): 
Nov 01 20:06:33.320 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=config-reloader container exited with code 2 (Error): 2019/11/01 19:47:00 Watching directory: "/etc/alertmanager/config"\n
Nov 01 20:06:33.320 E ns/openshift-monitoring pod/alertmanager-main-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=alertmanager-proxy container exited with code 2 (Error): 2019/11/01 19:47:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:47:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:47:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:47:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2019/11/01 19:47:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:47:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2019/11/01 19:47:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:47:00 http.go:96: HTTPS: listening on [::]:9095\n
Nov 01 20:06:33.326 E ns/openshift-marketplace pod/redhat-operators-8648c86cfb-qdtzd node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=redhat-operators container exited with code 2 (Error): 
Nov 01 20:06:33.343 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-7m67c node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=telemeter-client container exited with code 2 (Error): 
Nov 01 20:06:33.343 E ns/openshift-monitoring pod/telemeter-client-68dbb6b644-7m67c node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=reload container exited with code 2 (Error): 
Nov 01 20:06:33.355 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 2 (Error): 2019/11/01 19:47:01 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Nov 01 20:06:33.355 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 2 (Error): 2019/11/01 19:47:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:47:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2019/11/01 19:47:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2019/11/01 19:47:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2019/11/01 19:47:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2019/11/01 19:47:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2019/11/01 19:47:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2019/11/01 19:47:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2019/11/01 19:47:01 http.go:96: HTTPS: listening on [::]:9091\n
Nov 01 20:06:33.355 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 2 (Error): ts=2019-11-01T19:47:00.81771843Z caller=main.go:85 msg="Starting prometheus-config-reloader version '0.34.0'."\nlevel=info ts=2019-11-01T19:47:00.81781114Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2019-11-01T19:47:00.843800054Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2019-11-01T19:47:05.884142569Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Nov 01 20:06:33.430 E ns/openshift-monitoring pod/grafana-69b9b6556-28npn node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=grafana-proxy container exited with code 2 (Error): 
Nov 01 20:06:33.449 E ns/openshift-marketplace pod/community-operators-d946d6555-f2lrd node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=community-operators container exited with code 2 (Error): 
Nov 01 20:06:33.465 E ns/openshift-monitoring pod/openshift-state-metrics-674774b7c4-6hr89 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=openshift-state-metrics container exited with code 2 (Error): 
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.496 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Nov 01 20:06:33.503 E ns/openshift-monitoring pod/kube-state-metrics-54874f8db8-4cnxb node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=kube-state-metrics container exited with code 2 (Error): 
Nov 01 20:06:33.511 E ns/openshift-ingress pod/router-default-7dbf667fc8-w578j node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=router container exited with code 2 (Error): 2019-11-01T19:50:14.035Z	INFO	router.router	router/template.go:293	starting router	{"version": "v0.0.0-master+$Format:%h$"}\n2019-11-01T19:50:14.036Z	INFO	router.metrics	metrics/metrics.go:153	router health and metrics port listening on HTTP and HTTPS	{"address": "0.0.0.0:1936"}\n2019-11-01T19:50:14.040Z	INFO	router.template	template/router.go:294	watching for changes	{"path": "/etc/pki/tls/private"}\nE1101 19:50:14.041162       1 haproxy.go:395] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory\n2019-11-01T19:50:14.058Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:50:14.058Z	INFO	router.router	router/router.go:257	router is including routes in all namespaces\n2019-11-01T19:50:14.288Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:50:30.801Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:51:50.181Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n2019-11-01T19:51:55.179Z	INFO	router.template	template/router.go:548	router reloaded	{"output": " - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"}\n
Nov 01 20:08:04.632 E ns/openshift-monitoring pod/prometheus-k8s-1 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T20:08:01.894Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T20:08:01.902Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T20:08:01.902Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T20:08:01.903Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T20:08:01.903Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T20:08:01.903Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T20:08:01.904Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T20:08:01.904Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01
Nov 01 20:08:04.641 E ns/openshift-monitoring pod/prometheus-k8s-0 node/worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2019-11-01T20:08:02.488Z caller=web.go:450 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2019-11-01T20:08:02.492Z caller=head.go:514 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2019-11-01T20:08:02.492Z caller=head.go:562 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2019-11-01T20:08:02.492Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2019-11-01T20:08:02.493Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2019-11-01T20:08:02.493Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2019-11-01T20:08:02.493Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2019-11-01T20:08:02.493Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2019-11-01

				
from junit_e2e_20191101-200917.xml



openshift-tests [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s] 3.70s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\srollback\swithout\sunnecessary\srestarts\s\[Conformance\]\s\[Suite\:openshift\/conformance\/serial\/minimal\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/apps/daemon_set.go:385]: Conformance test suite needs a cluster with at least 2 nodes.
Expected
    <int>: 1
to be >
    <int>: 1
				
from junit_e2e_20191101-200917.xml
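The assertion output above ("Expected <int>: 1 to be > <int>: 1") is the standard Gomega rendering of a numeric precondition: the serial DaemonSet rollback test requires more than one schedulable node, and this metal-serial cluster only had worker-1 available. The Go test below is a minimal sketch of that precondition, not the conformance suite's actual code; the package, helper, and fixture names are illustrative.

package e2esketch

// Sketch only: shows the kind of Gomega check that yields the
// "Expected <int>: 1 to be > <int>: 1" output when a cluster
// has a single schedulable node.

import (
	"testing"

	"github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
)

// countSchedulable counts nodes that are not cordoned off.
func countSchedulable(nodes []corev1.Node) int {
	n := 0
	for _, node := range nodes {
		if !node.Spec.Unschedulable {
			n++
		}
	}
	return n
}

func TestNeedsAtLeastTwoNodes(t *testing.T) {
	g := gomega.NewWithT(t)

	// In the real suite the node list comes from the cluster; here a single
	// zero-value node stands in for the one-worker metal-serial environment.
	nodes := []corev1.Node{{}}

	// Fails exactly like the DaemonSet rollback test above: 1 is not > 1.
	g.Expect(countSchedulable(nodes)).To(gomega.BeNumerically(">", 1),
		"Conformance test suite needs a cluster with at least 2 nodes.")
}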



openshift-tests [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s] 10m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun\s\s\[Conformance\]\s\[Suite\:openshift\/conformance\/serial\/minimal\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/util.go:2840]: Nov  1 20:02:01.861: Timed out after 10m0s waiting for stable cluster.
				
from junit_e2e_20191101-200917.xml
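The "Timed out after 10m0s waiting for stable cluster" failure comes from the framework polling until no pods are left pending before it runs the scheduler predicate tests; with the node disruption recorded earlier in this run, that never happened within the 10-minute budget. The sketch below shows the general shape of such a poll using k8s.io/apimachinery's wait.PollImmediate; the function and type names are assumptions for illustration, not the framework's actual WaitForStableCluster implementation.

package e2esketch

// Sketch only: a 10-minute poll that gives up with a
// "timed out ... waiting for stable cluster" style error
// when pods never settle.

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// listPods is a stand-in for a client call returning all pods in the cluster.
type listPods func() ([]corev1.Pod, error)

func waitForStableCluster(list listPods, timeout time.Duration) error {
	err := wait.PollImmediate(10*time.Second, timeout, func() (bool, error) {
		pods, err := list()
		if err != nil {
			return false, err
		}
		for _, p := range pods {
			// Any pod still Pending (for example, unschedulable during the
			// node disruption seen earlier in this run) keeps the cluster
			// counted as unstable.
			if p.Status.Phase == corev1.PodPending {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		return fmt.Errorf("timed out after %v waiting for stable cluster: %w", timeout, err)
	}
	return nil
}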



openshift-tests [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching [Suite:openshift/conformance/serial] [Suite:k8s] 10m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sthat\sNodeAffinity\sis\srespected\sif\snot\smatching\s\[Suite\:openshift\/conformance\/serial\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/util.go:2840]: Nov  1 19:25:20.276: Timed out after 10m0s waiting for stable cluster.
				
from junit_e2e_20191101-200917.xml



openshift-tests [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s] 10m5s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sthat\sNodeSelector\sis\srespected\sif\snot\smatching\s\s\[Conformance\]\s\[Suite\:openshift\/conformance\/serial\/minimal\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/util.go:2840]: Nov  1 19:35:40.822: Timed out after 10m0s waiting for stable cluster.
				
from junit_e2e_20191101-200917.xml



openshift-tests [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms [Suite:openshift/conformance/serial] [Suite:k8s] 1m37s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\sbe\sscheduled\sto\snode\sthat\sdon\'t\smatch\sthe\sPodAntiAffinity\sterms\s\[Suite\:openshift\/conformance\/serial\]\s\[Suite\:k8s\]$'
fail [k8s.io/kubernetes/test/e2e/framework/util.go:1167]: Expected
    <string>: worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com
not to equal
    <string>: worker-1.ci-op-s8qkptjz-6c024.origin-ci-int-aws.dev.rhcloud.com
				
from junit_e2e_20191101-200917.xml
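This priorities test asks the scheduler to keep a new pod away from a node that already runs a pod matching its anti-affinity term, then asserts that the two pods landed on different nodes; with worker-1 as the only schedulable worker, both pods ended up there and the not-to-equal check produced the output above. The snippet below is an illustrative construction of that kind of soft PodAntiAffinity term using the core/v1 types; the label key/value and weight are assumptions, not the test's literal spec.

package e2esketch

// Sketch only: builds a preferred (soft) anti-affinity term keyed by
// hostname, the kind of term the SchedulerPriorities test exercises.

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityAgainst asks the scheduler to avoid nodes that already run
// pods carrying the given label.
func antiAffinityAgainst(labelKey, labelValue string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
				Weight: 100,
				PodAffinityTerm: corev1.PodAffinityTerm{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{labelKey: labelValue},
					},
					TopologyKey: "kubernetes.io/hostname",
				},
			}},
		},
	}
}

A preferred term is only a scoring hint: when no other node qualifies, the scheduler still places the pod on the same host, which is exactly the single-node failure mode recorded here.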



operator Run template e2e-metal-serial - e2e-metal-serial container test 1h29m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=operator\sRun\stemplate\se2e\-metal\-serial\s\-\se2e\-metal\-serial\scontainer\stest$'
Nov 01 20:08:25.079 W ns/openshift-marketplace pod/community-operators-d946d6555-n5lwr Readiness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n
Nov 01 20:08:25.279 W ns/openshift-marketplace pod/certified-operators-54459f97bc-8glw4 Liveness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n
Nov 01 20:08:29.484 W ns/openshift-marketplace pod/community-operators-d946d6555-n5lwr Liveness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n (2 times)
Nov 01 20:08:29.681 W ns/openshift-marketplace pod/community-operators-d946d6555-n5lwr Readiness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n (2 times)
Nov 01 20:08:31.019 W ns/openshift-marketplace pod/community-operators-d946d6555-n5lwr Liveness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n (3 times)
Nov 01 20:08:32.736 W ns/openshift-marketplace pod/community-operators-d946d6555-n5lwr Readiness probe failed: timeout: failed to connect service "localhost:50051" within 1s\n (3 times)
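
The repeated probe warnings above mean the kubelet's health check could not reach the marketplace registry's gRPC endpoint on localhost:50051 within its 1-second budget while the node was being disrupted. The standalone program below is only a sketch of that connect-within-deadline shape using a plain TCP dial; the real probe issues a gRPC health-check RPC against the same address, and the error string here merely mimics the message format seen in the log.

package main

// Sketch only: reproduces the connect-with-deadline behaviour behind the
// "failed to connect service ... within 1s" probe failures above.

import (
	"fmt"
	"net"
	"time"
)

func probeRegistry(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		// Mimics the probe failure message recorded for the marketplace pods.
		return fmt.Errorf("timeout: failed to connect service %q within %v: %v", addr, timeout, err)
	}
	return conn.Close()
}

func main() {
	if err := probeRegistry("localhost:50051", time.Second); err != nil {
		fmt.Println(err)
	}
}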

Failing tests:

[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching [Suite:openshift/conformance/serial] [Suite:k8s]
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms [Suite:openshift/conformance/serial] [Suite:k8s]

Writing JUnit report to /tmp/artifacts/junit/junit_e2e_20191101-200917.xml

error: 5 fail, 42 pass, 166 skip (1h29m4s)

				from junit_operator.xml


