Result: FAILURE
Tests: 5 failed / 18 succeeded
Started: 2020-02-18 13:30
Elapsed: 1h24m
Work namespace: ci-op-16llxmvs
Refs: release-4.3:3ce21b38, 298:666618c0
Pod: b0cea47a-5252-11ea-813e-0a58ac104e5c
Repo: openshift/cluster-api-provider-aws
Revision: 1

Test Failures


Cluster upgrade control-plane-upgrade 35m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\scontrol\-plane\-upgrade$'
Feb 18 14:44:19.283: API was unreachable during upgrade for at least 2m9s:

Feb 18 14:16:40.247 E kube-apiserver Kube API started failing: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 18.144.149.178:6443: connect: connection refused
Feb 18 14:16:40.247 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 18.144.149.178:6443: connect: connection refused
Feb 18 14:16:41.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:16:41.090 - 1s    E kube-apiserver Kube API is not responding to GET requests
Feb 18 14:16:41.518 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:16:42.165 I kube-apiserver Kube API started responding to GET requests
Feb 18 14:17:33.178 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:17:34.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:17:34.170 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:23:19.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 14:23:20.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:23:21.032 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:24:05.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 14:24:05.165 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:33:10.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 14:33:11.090 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:33:40.165 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:04.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 14:36:04.163 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:20.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 14:36:20.163 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:23.293 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:36:24.090 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:36:29.509 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:32.508 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:36:33.090 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:36:35.654 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:38.652 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:36:38.726 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:41.724 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:36:42.090 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:36:44.872 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:36:47.870 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:36:48.090 - 12s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:00.229 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:03.228 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:04.090 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:18.661 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:21.660 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:22.090 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:24.806 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:27.804 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:27.878 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:35.548 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:36.090 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:41.766 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:44.764 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:45.090 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:47.913 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:37:57.052 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:37:57.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:57.126 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:38:00.124 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:38:00.198 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:38:06.268 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:38:06.352 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:38:57.571 E kube-apiserver Kube API started failing: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Feb 18 14:38:57.571 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Feb 18 14:38:58.090 - 8s    E kube-apiserver Kube API is not responding to GET requests
Feb 18 14:38:58.090 - 8s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:39:06.972 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:39:06.984 I kube-apiserver Kube API started responding to GET requests
Feb 18 14:39:22.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 14:39:22.164 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:39:39.090 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 14:39:40.090 - 12s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:39:53.806 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:39:56.799 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:39:57.090 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:03.018 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:06.015 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:06.089 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:09.087 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:09.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:09.161 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:12.159 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:13.090 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:15.318 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:18.303 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:19.090 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:21.452 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:24.449 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:25.090 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:27.594 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:33.663 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:33.738 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:36.735 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:37.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:37.167 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:39.808 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:40.090 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:40.166 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:42.879 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:42.954 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 14:40:49.024 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 14:40:49.090 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:55.248 I openshift-apiserver OpenShift API started responding to GET requests

github.com/openshift/origin/test/extended/util/disruption/controlplane.(*AvailableTest).Test(0xa55ec38, 0xc0038dab40, 0xc004532ae0, 0x2)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/controlplane/controlplane.go:56 +0x68d
github.com/openshift/origin/test/extended/util/disruption.(*chaosMonkeyAdapter).Test(0xc0035a5b80, 0xc0050c5c60)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:133 +0x3c6
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0050c5c60, 0xc0050c8d50)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgrade_1582037068.xml
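Note: the "stopped/started responding" events above come from a poller that repeatedly issues GET requests against the cluster API endpoints during the upgrade and totals the time it observed failures. The following is a minimal illustrative sketch of that pattern only, not the actual openshift/origin controlplane.go implementation; the endpoint URL, poll interval, and loop bounds are assumptions for the example.

// Illustrative sketch of an availability poller in the spirit of the
// control-plane disruption check above. NOT the openshift/origin code;
// endpoint, interval, and iteration count are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed endpoint; the real test derives the API URL from the cluster config.
	const endpoint = "https://api.example.cluster:6443/api/v1/namespaces/kube-system?timeout=15s"
	const interval = time.Second

	client := &http.Client{Timeout: 15 * time.Second}

	var down time.Duration // accumulated unavailability
	var downSince time.Time
	available := true

	for i := 0; i < 60; i++ { // poll for ~1 minute in this sketch
		resp, err := client.Get(endpoint)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		switch {
		case available && !ok:
			// Transition: API stopped responding; start timing the outage.
			available, downSince = false, time.Now()
			fmt.Printf("%s E Kube API started failing: %v\n", time.Now().Format(time.StampMilli), err)
		case !available && ok:
			// Transition: API recovered; add this outage to the running total.
			available = true
			down += time.Since(downSince)
			fmt.Printf("%s I Kube API started responding to GET requests\n", time.Now().Format(time.StampMilli))
		}
		time.Sleep(interval)
	}
	if !available {
		down += time.Since(downSince)
	}

	fmt.Printf("API was unreachable for at least %s\n", down.Round(time.Second))
}

The real test tracks the kube-apiserver and openshift-apiserver endpoints separately (as the event stream above shows) and runs for the full upgrade window rather than a fixed iteration count.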



Cluster upgrade k8s-service-upgrade 35m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sk8s\-service\-upgrade$'
Feb 18 14:44:19.283: Service was unreachable during upgrade for at least 1m6s:

Feb 18 14:10:50.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:10:51.388 - 1s    E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:10:52.554 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:10:53.394 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:10:54.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:10:54.567 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:10:55.408 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:10:56.388 - 999ms E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:10:57.552 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:10:58.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:10:59.388 - 999ms E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:00.537 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:03.396 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:04.388 - 999ms E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:05.555 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:08.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:09.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:09.535 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:10.415 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:11.388 - 1s    E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:12.538 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:15.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:16.388 - 3s    E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:19.539 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:27.396 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:28.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:28.538 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:30.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:31.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:31.543 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:32.396 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:33.388 - 999ms E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:34.535 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:36.403 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:37.388 - 1s    E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:38.537 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:11:39.395 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:11:40.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:11:40.535 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:24:30.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests on reused connections
Feb 18 14:24:30.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:24:30.544 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests on reused connections
Feb 18 14:24:30.545 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:36:24.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:36:24.549 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections
Feb 18 14:36:35.388 E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 14:36:36.388 - 34s   E ns/e2e-k8s-service-upgrade-7064 svc/service-test Service is not responding to GET requests over new connections
Feb 18 14:37:10.863 I ns/e2e-k8s-service-upgrade-7064 svc/service-test Service started responding to GET requests over new connections

github.com/openshift/origin/test/e2e/upgrade/service.(*UpgradeTest).Test(0xc00461f950, 0xc0038dadc0, 0xc004532ae0, 0x2)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/service/service.go:124 +0xb37
github.com/openshift/origin/test/extended/util/disruption.(*chaosMonkeyAdapter).Test(0xc0035a5bc0, 0xc0050c5c80)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:133 +0x3c6
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0050c5c80, 0xc0050c8d60)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgrade_1582037068.xml
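Note: the service-upgrade events above distinguish failures "over new connections" from failures "on reused connections". Below is a minimal illustrative sketch of that distinction using plain net/http, not the actual test code in service.go; the service URL and probe cadence are assumptions for the example.

// Illustrative sketch only: probing a service over new connections versus
// on reused connections. NOT the openshift/origin service upgrade test.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(client *http.Client, url, label string) {
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s E Service stopped responding to GET requests %s: %v\n",
			time.Now().Format(time.StampMilli), label, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("%s I Service responded to GET requests %s\n",
		time.Now().Format(time.StampMilli), label)
}

func main() {
	const url = "http://service-test.example:80/" // assumed; the real test targets the LoadBalancer hostname

	// "over new connections": keep-alives disabled, so every request opens a fresh TCP connection.
	newConn := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	// "on reused connections": keep-alives enabled, so an established connection is reused.
	reused := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{MaxIdleConnsPerHost: 1},
	}

	for i := 0; i < 10; i++ {
		probe(newConn, url, "over new connections")
		probe(reused, url, "on reused connections")
		time.Sleep(time.Second)
	}
}

Forcing a new connection per request is what surfaces load-balancer and endpoint churn during the upgrade; the reused-connection client only reports a failure when an already-established connection actually breaks, which matches the much rarer "on reused connections" events in the log.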



openshift-tests Monitor cluster while tests execute 35m32s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
254 error level events were detected during this test run:

Feb 18 14:09:00.804 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T14:08:59.285Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T14:08:59.289Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T14:08:59.290Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T14:08:59.292Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T14:08:59.292Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T14:08:59.292Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T14:08:59.293Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T14:08:59.293Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 14:11:04.218 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Feb 18 14:11:05.619 E ns/openshift-cluster-version pod/cluster-version-operator-649884f68c-r4hpc node/ip-10-0-133-8.us-west-1.compute.internal container=cluster-version-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:11:25.674 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-c4f947856-lxrfc node/ip-10-0-133-8.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): :48.237855       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ce2ebcbd-32aa-42cb-8849-82213087e3d5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 4"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 4" to "Available: 3 nodes are active; 3 nodes are at revision 4"\nI0218 14:08:50.214757       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ce2ebcbd-32aa-42cb-8849-82213087e3d5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-4 -n openshift-kube-apiserver: cause by changes in data.status\nI0218 14:08:58.622504       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ce2ebcbd-32aa-42cb-8849-82213087e3d5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-4-ip-10-0-145-111.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nW0218 14:11:05.074454       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18458 (18542)\nW0218 14:11:23.363714       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18542 (18663)\nI0218 14:11:25.006756       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:11:25.006842       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:13:02.974 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7f46dcc454-z427c node/ip-10-0-133-8.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:14:39.227 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5dc8854fc8-tn5mv node/ip-10-0-133-8.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): flector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15693 (16826)\nW0218 14:08:28.900239       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 13140 (16365)\nW0218 14:08:29.561430       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 10438 (16359)\nW0218 14:08:29.586601       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10647 (15729)\nW0218 14:08:29.631935       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 15688 (16292)\nW0218 14:08:29.632161       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 15667 (15729)\nW0218 14:08:29.634096       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 9650 (15729)\nW0218 14:08:29.643416       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 9647 (16284)\nW0218 14:11:05.059472       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18458 (18542)\nW0218 14:11:23.157854       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18542 (18663)\nI0218 14:14:38.369933       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:14:38.370008       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:16:41.132 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-apiserver-5 container exited with code 1 (Error): g   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default "10.0.0.0/24")\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Feb 18 14:18:14.577 E ns/openshift-image-registry pod/image-registry-6684464bfb-vqkhb node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:15.078 E ns/openshift-image-registry pod/image-registry-6684464bfb-cb78p node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:15.428 E ns/openshift-image-registry pod/image-registry-6684464bfb-m77jb node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:15.776 E ns/openshift-image-registry pod/image-registry-6684464bfb-bsdrr node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:16.052 E ns/openshift-image-registry pod/image-registry-6684464bfb-8bfsj node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:17.025 E ns/openshift-image-registry pod/image-registry-6684464bfb-kq549 node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:17.608 E ns/openshift-image-registry pod/image-registry-6684464bfb-94qw7 node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:18.049 E ns/openshift-image-registry pod/image-registry-6684464bfb-j6z6h node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:18.816 E ns/openshift-image-registry pod/image-registry-6684464bfb-62n22 node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:19.585 E ns/openshift-monitoring pod/node-exporter-5t9tr node/ip-10-0-141-240.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:01:08Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:08Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:18:19.599 E ns/openshift-image-registry pod/image-registry-6684464bfb-gtccw node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:20.196 E ns/openshift-image-registry pod/image-registry-6684464bfb-9rjsx node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:21.246 E ns/openshift-image-registry pod/image-registry-6684464bfb-d5q9w node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:21.907 E ns/openshift-image-registry pod/image-registry-6684464bfb-vmhgr node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:22.701 E ns/openshift-image-registry pod/image-registry-6684464bfb-h7cgb node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.289 E ns/openshift-image-registry pod/image-registry-6684464bfb-pdtjx node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.641 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:23.867 E ns/openshift-image-registry pod/image-registry-6684464bfb-8qcrl node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:24.854 E ns/openshift-image-registry pod/image-registry-6684464bfb-hbs2x node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:25.426 E ns/openshift-image-registry pod/image-registry-6684464bfb-cgn6c node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:26.244 E ns/openshift-image-registry pod/image-registry-6684464bfb-ql72r node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:27.533 E ns/openshift-image-registry pod/image-registry-6684464bfb-hfcc7 node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:29.015 E ns/openshift-image-registry pod/image-registry-6684464bfb-f5wmv node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:29.829 E ns/openshift-image-registry pod/image-registry-6684464bfb-frxth node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:30.331 E ns/openshift-image-registry pod/image-registry-6684464bfb-msg95 node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:31.297 E ns/openshift-image-registry pod/image-registry-6684464bfb-q4qll node/ip-10-0-136-19.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:32.345 E ns/openshift-image-registry pod/image-registry-6684464bfb-v9xrq node/ip-10-0-149-108.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:33.554 E ns/openshift-image-registry pod/image-registry-6684464bfb-225gr node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:34.310 E ns/openshift-image-registry pod/image-registry-6684464bfb-d4nbc node/ip-10-0-141-240.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:35.785 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-19.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:35.785 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-19.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:35.785 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-19.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:37.076 E ns/openshift-ingress-operator pod/ingress-operator-6d4d88674-d27q9 node/ip-10-0-145-111.us-west-1.compute.internal container=ingress-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:37.076 E ns/openshift-ingress-operator pod/ingress-operator-6d4d88674-d27q9 node/ip-10-0-145-111.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:37.318 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6d69svznz node/ip-10-0-145-111.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:40.458 E ns/openshift-service-ca-operator pod/service-ca-operator-8cdc68b8-9n59m node/ip-10-0-133-8.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Feb 18 14:18:44.634 E ns/openshift-console pod/downloads-79674798d7-4pmvq node/ip-10-0-145-111.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:46.270 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:50.465 E ns/openshift-operator-lifecycle-manager pod/packageserver-6d96fc6f47-jf7d2 node/ip-10-0-133-37.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:51.509 E ns/openshift-monitoring pod/kube-state-metrics-86f799759-djth8 node/ip-10-0-136-19.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 18 14:18:52.534 E ns/openshift-monitoring pod/openshift-state-metrics-77558c9d99-svffn node/ip-10-0-136-19.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 18 14:18:52.680 E ns/openshift-monitoring pod/telemeter-client-6ffb4c7c79-74d5w node/ip-10-0-141-240.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 18 14:18:52.680 E ns/openshift-monitoring pod/telemeter-client-6ffb4c7c79-74d5w node/ip-10-0-141-240.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Feb 18 14:18:53.987 E ns/openshift-marketplace pod/redhat-operators-8c558c56-tkqvb node/ip-10-0-136-19.us-west-1.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:18:57.698 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-240.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/18 14:06:53 Watching directory: "/etc/alertmanager/config"\n
Feb 18 14:18:57.698 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-240.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/18 14:06:54 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 14:06:54 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 14:06:54 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 14:06:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/18 14:06:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 14:06:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 14:06:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 14:06:54 http.go:96: HTTPS: listening on [::]:9095\n
Feb 18 14:18:59.541 E ns/openshift-monitoring pod/node-exporter-8nlb9 node/ip-10-0-133-8.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:01:09Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:09Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:19:07.611 E ns/openshift-apiserver pod/apiserver-vskn5 node/ip-10-0-133-37.us-west-1.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:19:07.746 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-240.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T14:19:00.152Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T14:19:00.161Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T14:19:00.162Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T14:19:00.163Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T14:19:00.163Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T14:19:00.163Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T14:19:00.164Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T14:19:00.164Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 14:19:12.637 E ns/openshift-controller-manager pod/controller-manager-4twhb node/ip-10-0-133-8.us-west-1.compute.internal container=controller-manager container exited with code 137 (OOMKilled): 
Feb 18 14:19:15.813 E ns/openshift-monitoring pod/thanos-querier-6c5c559cc7-c4g9v node/ip-10-0-141-240.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/18 14:07:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 14:07:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 14:07:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 14:07:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/18 14:07:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 14:07:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 14:07:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 14:07:30 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/18 14:07:30 http.go:96: HTTPS: listening on [::]:9091\n
Feb 18 14:19:19.711 E ns/openshift-monitoring pod/node-exporter-qjn9j node/ip-10-0-136-19.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:01:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:01:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:19:22.718 E ns/openshift-marketplace pod/redhat-operators-5596755ff9-d6rzw node/ip-10-0-136-19.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 18 14:19:46.813 E ns/openshift-marketplace pod/community-operators-7fd8b5d5c9-f2mmd node/ip-10-0-136-19.us-west-1.compute.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:19:49.808 E ns/openshift-ingress pod/router-default-74bdc69f5f-r426t node/ip-10-0-136-19.us-west-1.compute.internal container=router container exited with code 2 (Error): //localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:18:46.198689       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:18:51.162220       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:18:56.194245       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:01.165806       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:06.166666       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nE0218 14:19:11.173167       1 limiter.go:140] error reloading router: waitid: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0218 14:19:16.171941       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:21.161639       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:26.171302       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:45.484008       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 14:20:05.318 E ns/openshift-console-operator pod/console-operator-567b574489-2ks2g node/ip-10-0-145-111.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): :299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20681 (21524)\nW0218 14:18:32.918704       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20657 (21524)\nW0218 14:18:32.918925       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20648 (21524)\nW0218 14:18:33.090422       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20657 (21585)\nW0218 14:18:33.090662       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 17988 (21700)\nW0218 14:18:33.166529       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 18028 (21858)\nW0218 14:18:33.294847       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 19856 (20681)\nW0218 14:18:34.572966       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 18031 (24825)\nW0218 14:18:34.621631       1 reflector.go:299] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: watch of *v1.ConsoleCLIDownload ended with: too old resource version: 18032 (24836)\nW0218 14:18:34.673227       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 17989 (24838)\nI0218 14:20:04.523526       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:20:04.524018       1 leaderelection.go:66] leaderelection lost\nF0218 14:20:04.529112       1 builder.go:217] server exited\n
Feb 18 14:20:06.488 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T14:19:56.932Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T14:19:56.936Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T14:19:56.936Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T14:19:56.938Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T14:19:56.938Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T14:19:56.938Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T14:19:56.939Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T14:19:56.939Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 14:20:16.351 E ns/openshift-controller-manager pod/controller-manager-2j484 node/ip-10-0-145-111.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Feb 18 14:20:17.443 E ns/openshift-ingress pod/router-default-74bdc69f5f-hrxp2 node/ip-10-0-149-108.us-west-1.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:16.202558       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:21.194572       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:26.173863       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:45.461910       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:50.457036       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:19:55.458383       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:20:00.458224       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:20:05.458553       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:20:10.458219       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:20:15.462176       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 14:21:05.381 E ns/openshift-controller-manager pod/controller-manager-lz6lf node/ip-10-0-133-37.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Feb 18 14:21:29.591 E ns/openshift-console pod/console-769f45b4f6-bvh94 node/ip-10-0-145-111.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/02/18 14:05:33 cmd/main: cookies are secure!\n2020/02/18 14:05:33 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/18 14:05:43 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/18 14:05:53 cmd/main: Binding to [::]:8443...\n2020/02/18 14:05:53 cmd/main: using TLS\n
Feb 18 14:21:41.520 E ns/openshift-console pod/console-769f45b4f6-f47sl node/ip-10-0-133-37.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/02/18 14:04:47 cmd/main: cookies are secure!\n2020/02/18 14:04:47 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/18 14:04:57 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/18 14:05:07 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/18 14:05:17 cmd/main: Binding to [::]:8443...\n2020/02/18 14:05:17 cmd/main: using TLS\n
Feb 18 14:22:41.288 E ns/openshift-sdn pod/sdn-controller-qw6j9 node/ip-10-0-133-8.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): 115664 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-843"\nI0218 14:09:01.553372       1 vnids.go:115] Allocated netid 10929154 for namespace "e2e-k8s-service-upgrade-7064"\nI0218 14:09:01.571082       1 vnids.go:115] Allocated netid 10573421 for namespace "e2e-control-plane-upgrade-4918"\nI0218 14:09:01.584002       1 vnids.go:115] Allocated netid 5420143 for namespace "e2e-k8s-sig-apps-job-upgrade-7373"\nI0218 14:09:01.612216       1 vnids.go:115] Allocated netid 13593123 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8425"\nI0218 14:09:01.629882       1 vnids.go:115] Allocated netid 9435705 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-3473"\nE0218 14:14:45.844087       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://api-int.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=19830&timeout=7m4s&timeoutSeconds=424&watch=true: dial tcp 10.0.140.44:6443: connect: connection refused\nW0218 14:14:46.993408       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 17394 (19850)\nW0218 14:14:47.003247       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 15694 (19852)\nW0218 14:18:33.499450       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 17626 (20682)\nW0218 14:18:35.318429       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 19852 (24760)\nW0218 14:18:35.318569       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19850 (24808)\n
Feb 18 14:22:47.282 E ns/openshift-sdn pod/sdn-smzvq node/ip-10-0-136-19.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 0218 14:21:41.065890    2617 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:21:41.133768    2617 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:21:41.133798    2617 proxier.go:350] userspace syncProxyRules took 67.880122ms\nI0218 14:21:41.133816    2617 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:22:11.134086    2617 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:22:11.299894    2617 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:22:11.368482    2617 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:22:11.368511    2617 proxier.go:350] userspace syncProxyRules took 68.587231ms\nI0218 14:22:11.368529    2617 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:22:38.782721    2617 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.4:6443 10.129.0.3:6443]\nI0218 14:22:38.782767    2617 roundrobin.go:218] Delete endpoint 10.130.0.13:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 14:22:38.782830    2617 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:22:38.955835    2617 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:22:39.024200    2617 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:22:39.024225    2617 proxier.go:350] userspace syncProxyRules took 68.364422ms\nI0218 14:22:39.024237    2617 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:22:46.426950    2617 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nE0218 14:22:46.426989    2617 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0218 14:22:46.748867    2617 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 14:22:46.748915    2617 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 14:23:04.877 E ns/openshift-sdn pod/sdn-controller-x4zf5 node/ip-10-0-145-111.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0218 13:51:56.588645       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 18 14:23:07.430 E ns/openshift-sdn pod/sdn-ctns4 node/ip-10-0-133-8.us-west-1.compute.internal container=sdn container exited with code 255 (Error): syncProxyRules complete\nI0218 14:22:39.074958    3096 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:22:39.074990    3096 proxier.go:350] userspace syncProxyRules took 75.148095ms\nI0218 14:22:39.075004    3096 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:22:49.757613    3096 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-7064/service-test: to [10.131.0.13:80]\nI0218 14:22:49.757660    3096 roundrobin.go:218] Delete endpoint 10.129.2.20:80 for service "e2e-k8s-service-upgrade-7064/service-test:"\nI0218 14:22:49.757736    3096 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:22:49.942940    3096 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:22:50.017362    3096 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:22:50.017386    3096 proxier.go:350] userspace syncProxyRules took 74.421094ms\nI0218 14:22:50.017397    3096 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:00.766307    3096 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-7064/service-test: to [10.129.2.20:80 10.131.0.13:80]\nI0218 14:23:00.766349    3096 roundrobin.go:218] Delete endpoint 10.129.2.20:80 for service "e2e-k8s-service-upgrade-7064/service-test:"\nI0218 14:23:00.766410    3096 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:23:01.040958    3096 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:23:01.126973    3096 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:23:01.127002    3096 proxier.go:350] userspace syncProxyRules took 86.016759ms\nI0218 14:23:01.127018    3096 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:06.473260    3096 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 14:23:06.473327    3096 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 14:23:09.426 E ns/openshift-multus pod/multus-admission-controller-6pm8d node/ip-10-0-133-8.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 18 14:23:09.780 E ns/openshift-multus pod/multus-w2l2t node/ip-10-0-149-108.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:23:12.373 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:23:32.400 E ns/openshift-sdn pod/sdn-kdr2k node/ip-10-0-141-240.us-west-1.compute.internal container=sdn container exited with code 255 (Error): Rules complete\nI0218 14:22:50.207685    3774 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:22:50.207716    3774 proxier.go:350] userspace syncProxyRules took 232.545933ms\nI0218 14:22:50.207733    3774 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:00.766141    3774 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-7064/service-test: to [10.129.2.20:80 10.131.0.13:80]\nI0218 14:23:00.766173    3774 roundrobin.go:218] Delete endpoint 10.129.2.20:80 for service "e2e-k8s-service-upgrade-7064/service-test:"\nI0218 14:23:00.766238    3774 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:23:00.936766    3774 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:23:01.006109    3774 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:23:01.006135    3774 proxier.go:350] userspace syncProxyRules took 69.345722ms\nI0218 14:23:01.006147    3774 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:15.609385    3774 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.4:6443]\nI0218 14:23:15.609416    3774 roundrobin.go:218] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 14:23:15.609460    3774 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:23:15.788826    3774 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:23:15.859780    3774 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:23:15.859804    3774 proxier.go:350] userspace syncProxyRules took 70.953473ms\nI0218 14:23:15.859815    3774 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:31.967558    3774 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 14:23:31.967606    3774 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 14:24:00.996 E ns/openshift-sdn pod/sdn-btbfg node/ip-10-0-133-37.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 15.837919   11030 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:23:15.923988   11030 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:23:15.924016   11030 proxier.go:350] userspace syncProxyRules took 86.071167ms\nI0218 14:23:15.924026   11030 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:45.924218   11030 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:23:45.955567   11030 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-pk5t9\nI0218 14:23:46.142777   11030 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:23:46.223895   11030 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:23:46.223922   11030 proxier.go:350] userspace syncProxyRules took 81.121253ms\nI0218 14:23:46.223932   11030 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:23:58.390755   11030 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0218 14:23:58.395072   11030 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nW0218 14:23:58.397609   11030 pod.go:274] CNI_ADD openshift-multus/multus-admission-controller-22lq9 failed: exit status 1\nI0218 14:23:58.405302   11030 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0218 14:23:58.407721   11030 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-22lq9\nI0218 14:23:58.459022   11030 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0218 14:23:58.462549   11030 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-22lq9\nF0218 14:24:00.179466   11030 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 18 14:24:01.823 E ns/openshift-multus pod/multus-zzzbn node/ip-10-0-133-8.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:24:03.000 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-598bfb56fb-pkn7d node/ip-10-0-133-37.us-west-1.compute.internal container=manager container exited with code 1 (Error): ecret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T14:18:22Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T14:18:22Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T14:18:22Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-02-18T14:18:22Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-02-18T14:18:22Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-02-18T14:18:22Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-02-18T14:18:22Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-02-18T14:18:22Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-02-18T14:18:22Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-02-18T14:20:21Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-02-18T14:20:21Z" level=info msg="reconcile complete" controller=metrics elapsed=1.83304ms\ntime="2020-02-18T14:22:21Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-02-18T14:22:21Z" level=info msg="reconcile complete" controller=metrics elapsed=1.347721ms\ntime="2020-02-18T14:24:01Z" level=error msg="leader election lostunable to run the manager"\n
Feb 18 14:24:28.105 E ns/openshift-service-ca pod/service-serving-cert-signer-789458465f-875hz node/ip-10-0-133-37.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 18 14:24:31.173 E ns/openshift-sdn pod/sdn-749jm node/ip-10-0-145-111.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 218 14:24:08.990168   14465 cmd.go:177] openshift-sdn network plugin ready\nI0218 14:24:10.007256   14465 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.129.0.48:2112]\nI0218 14:24:10.007442   14465 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.129.0.48:443]\nI0218 14:24:10.007502   14465 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:24:10.186163   14465 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:10.256746   14465 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:10.256775   14465 proxier.go:350] userspace syncProxyRules took 70.583677ms\nI0218 14:24:10.256788   14465 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:24:10.256803   14465 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:24:10.429067   14465 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:10.521924   14465 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:10.521957   14465 proxier.go:350] userspace syncProxyRules took 92.863683ms\nI0218 14:24:10.521972   14465 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:24:20.064592   14465 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.65:6443]\nI0218 14:24:20.064769   14465 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:24:20.311730   14465 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:20.398540   14465 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:20.398574   14465 proxier.go:350] userspace syncProxyRules took 86.8143ms\nI0218 14:24:20.398589   14465 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0218 14:24:30.300605   14465 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 18 14:24:36.194 E ns/openshift-multus pod/multus-admission-controller-lt7gh node/ip-10-0-145-111.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 18 14:24:50.426 E ns/openshift-multus pod/multus-7xw57 node/ip-10-0-145-111.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:25:00.219 E ns/openshift-sdn pod/sdn-fspdf node/ip-10-0-149-108.us-west-1.compute.internal container=sdn container exited with code 255 (Error): id proxy: syncProxyRules start\nI0218 14:24:36.123450   13604 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:36.195374   13604 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:36.195399   13604 proxier.go:350] userspace syncProxyRules took 71.924626ms\nI0218 14:24:36.195409   13604 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:24:57.090353   13604 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-7064/service-test: to [10.129.2.20:80]\nI0218 14:24:57.090382   13604 roundrobin.go:218] Delete endpoint 10.131.0.13:80 for service "e2e-k8s-service-upgrade-7064/service-test:"\nI0218 14:24:57.090425   13604 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:24:57.249476   13604 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:57.316151   13604 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:57.316175   13604 proxier.go:350] userspace syncProxyRules took 66.67604ms\nI0218 14:24:57.316185   13604 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 14:24:57.423927   13604 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.63:6443 10.129.0.65:6443 10.130.0.67:6443]\nI0218 14:24:57.423959   13604 roundrobin.go:218] Delete endpoint 10.128.0.63:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 14:24:57.424026   13604 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 14:24:57.592589   13604 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 14:24:57.659383   13604 proxier.go:371] userspace proxy: processing 0 service events\nI0218 14:24:57.659406   13604 proxier.go:350] userspace syncProxyRules took 66.793955ms\nI0218 14:24:57.659416   13604 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0218 14:24:59.372148   13604 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 18 14:25:39.354 E ns/openshift-multus pod/multus-9sq8s node/ip-10-0-133-37.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:26:17.081 E ns/openshift-multus pod/multus-lbj9c node/ip-10-0-136-19.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:27:02.816 E ns/openshift-multus pod/multus-c7vgt node/ip-10-0-141-240.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 14:27:44.626 E ns/openshift-machine-config-operator pod/machine-config-operator-89dfc7577-wsj72 node/ip-10-0-133-8.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error):  ended with: too old resource version: 17058 (20684)\nW0218 14:18:32.903711       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: too old resource version: 17058 (20684)\nW0218 14:18:32.940390       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19854 (21524)\nW0218 14:18:32.951348       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 18658 (20678)\nW0218 14:18:33.132263       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 18142 (22555)\nW0218 14:18:33.149539       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 17989 (24669)\nW0218 14:18:33.234826       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 17988 (21700)\nW0218 14:18:33.234964       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 17740 (20682)\nW0218 14:18:34.745780       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 17977 (24838)\nW0218 14:18:34.757213       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 18028 (24838)\nW0218 14:18:35.076409       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 18005 (24742)\n
Feb 18 14:29:39.533 E ns/openshift-machine-config-operator pod/machine-config-daemon-8vlxx node/ip-10-0-136-19.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:29:51.310 E ns/openshift-machine-config-operator pod/machine-config-daemon-ptkdl node/ip-10-0-145-111.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:30:08.207 E ns/openshift-machine-config-operator pod/machine-config-daemon-p8279 node/ip-10-0-141-240.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:30:16.673 E ns/openshift-machine-config-operator pod/machine-config-daemon-bpmmh node/ip-10-0-149-108.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:30:23.344 E ns/openshift-machine-config-operator pod/machine-config-daemon-h5lnz node/ip-10-0-133-37.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:30:32.211 E ns/openshift-machine-config-operator pod/machine-config-daemon-x8svh node/ip-10-0-133-8.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:30:45.419 E ns/openshift-machine-config-operator pod/machine-config-controller-5c8c587d69-b2p5m node/ip-10-0-133-37.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): ft/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 29438 (29695)\nI0218 14:24:25.667771       1 node_controller.go:435] Pool master: node ip-10-0-133-8.us-west-1.compute.internal is now reporting ready\nI0218 14:24:25.727770       1 node_controller.go:433] Pool master: node ip-10-0-145-111.us-west-1.compute.internal is now reporting unready: node ip-10-0-145-111.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:24:45.747803       1 node_controller.go:435] Pool master: node ip-10-0-145-111.us-west-1.compute.internal is now reporting ready\nI0218 14:25:05.304188       1 node_controller.go:433] Pool worker: node ip-10-0-149-108.us-west-1.compute.internal is now reporting unready: node ip-10-0-149-108.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:25:15.313163       1 node_controller.go:435] Pool worker: node ip-10-0-149-108.us-west-1.compute.internal is now reporting ready\nI0218 14:25:15.585763       1 node_controller.go:433] Pool master: node ip-10-0-133-37.us-west-1.compute.internal is now reporting unready: node ip-10-0-133-37.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:25:55.613548       1 node_controller.go:435] Pool master: node ip-10-0-133-37.us-west-1.compute.internal is now reporting ready\nI0218 14:25:58.819042       1 node_controller.go:433] Pool worker: node ip-10-0-136-19.us-west-1.compute.internal is now reporting unready: node ip-10-0-136-19.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:26:38.857293       1 node_controller.go:435] Pool worker: node ip-10-0-136-19.us-west-1.compute.internal is now reporting ready\nI0218 14:26:45.786968       1 node_controller.go:433] Pool worker: node ip-10-0-141-240.us-west-1.compute.internal is now reporting unready: node ip-10-0-141-240.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:27:25.811262       1 node_controller.go:435] Pool worker: node ip-10-0-141-240.us-west-1.compute.internal is now reporting ready\n
Feb 18 14:32:25.755 E ns/openshift-machine-config-operator pod/machine-config-server-vv7b4 node/ip-10-0-145-111.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0218 13:56:24.029876       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 13:56:24.031000       1 api.go:51] Launching server on :22624\nI0218 13:56:24.031058       1 api.go:51] Launching server on :22623\nI0218 13:57:45.577336       1 api.go:97] Pool worker requested by 10.0.140.44:61397\nI0218 13:57:46.192949       1 api.go:97] Pool worker requested by 10.0.152.135:4567\n
Feb 18 14:32:28.761 E ns/openshift-machine-config-operator pod/machine-config-server-rpp2f node/ip-10-0-133-37.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0218 13:56:23.165830       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 13:56:23.166825       1 api.go:51] Launching server on :22624\nI0218 13:56:23.166869       1 api.go:51] Launching server on :22623\nI0218 13:57:43.726674       1 api.go:97] Pool worker requested by 10.0.140.44:36382\n
Feb 18 14:32:33.084 E ns/openshift-console pod/downloads-67fb554d88-9cvzc node/ip-10-0-149-108.us-west-1.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:33.217 E ns/openshift-monitoring pod/prometheus-adapter-6bd4dddbd-z9gjt node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:34.033 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6849599c4f-n9jzh node/ip-10-0-133-37.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:34.232 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-5479bd4498-zdglx node/ip-10-0-133-37.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:32:38.214 E ns/openshift-service-ca pod/apiservice-cabundle-injector-5496b6df6f-lrzgv node/ip-10-0-133-37.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 18 14:32:39.152 E ns/openshift-operator-lifecycle-manager pod/packageserver-598cd9f8d6-swbw5 node/ip-10-0-133-8.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:33:12.167 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-19.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T14:32:56.917Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T14:32:56.921Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T14:32:56.922Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T14:32:56.923Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T14:32:56.923Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T14:32:56.923Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T14:32:56.924Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T14:32:56.924Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 14:33:25.902 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: waiting for Prometheus Route to become ready failed: waiting for RouteReady of prometheus-k8s: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Feb 18 14:34:56.951 E ns/openshift-monitoring pod/node-exporter-wdz9c node/ip-10-0-149-108.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:18:36Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:36Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:34:56.965 E ns/openshift-cluster-node-tuning-operator pod/tuned-9m8v2 node/ip-10-0-149-108.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ar/lib/tuned/ocp-pod-labels.cfg\nI0218 14:29:55.267958    1233 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:29:55.380192    1233 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:30:17.652644    1233 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-bpmmh) labels changed node wide: true\nI0218 14:30:20.266508    1233 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:30:20.267975    1233 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:30:20.379157    1233 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:32:31.088529    1233 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-6259/dp-657fc4b57d-9cg7z) labels changed node wide: true\nI0218 14:32:35.266533    1233 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:35.267988    1233 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:35.377831    1233 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:32:41.525182    1233 openshift-tuned.go:550] Pod (openshift-console/downloads-67fb554d88-9cvzc) labels changed node wide: true\nI0218 14:32:45.266550    1233 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:45.268199    1233 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:45.380716    1233 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:32:53.818237    1233 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 14:32:53.821829    1233 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 14:32:53.821850    1233 openshift-tuned.go:883] Increasing resyncPeriod to 112\n
Feb 18 14:34:56.995 E ns/openshift-multus pod/multus-r8m7b node/ip-10-0-149-108.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:34:57.058 E ns/openshift-machine-config-operator pod/machine-config-daemon-8fqrn node/ip-10-0-149-108.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:35:00.607 E ns/openshift-multus pod/multus-r8m7b node/ip-10-0-149-108.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:35:07.354 E ns/openshift-machine-config-operator pod/machine-config-daemon-8fqrn node/ip-10-0-149-108.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 14:35:08.905 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): ble: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:32:37.913308       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-6c95c674db-tshvb is bound successfully on node "ip-10-0-145-111.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:32:39.097671       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-55d7fc6d89-h7s8w is bound successfully on node "ip-10-0-133-8.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:32:39.676323       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b-p46f5: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0218 14:32:41.661035       1 scheduler.go:667] pod openshift-monitoring/prometheus-k8s-0 is bound successfully on node "ip-10-0-136-19.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:32:44.677507       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b-p46f5: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Feb 18 14:35:08.973 E ns/openshift-monitoring pod/node-exporter-l6gp9 node/ip-10-0-133-37.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:18:18Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:18Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:35:09.003 E ns/openshift-cluster-node-tuning-operator pod/tuned-8x7pq node/ip-10-0-133-37.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 0218 14:32:34.013536     707 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:34.280005     707 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:32:34.283772     707 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-5-ip-10-0-133-37.us-west-1.compute.internal) labels changed node wide: false\nI0218 14:32:34.666946     707 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-4-ip-10-0-133-37.us-west-1.compute.internal) labels changed node wide: false\nI0218 14:32:34.880391     707 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-133-37.us-west-1.compute.internal) labels changed node wide: false\nI0218 14:32:35.149282     707 openshift-tuned.go:550] Pod (openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-db4c5wqnq) labels changed node wide: true\nI0218 14:32:39.009641     707 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:39.011094     707 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:39.158210     707 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:32:42.171059     707 openshift-tuned.go:550] Pod (openshift-cluster-storage-operator/cluster-storage-operator-7946fbb7d7-vg2mx) labels changed node wide: true\nI0218 14:32:44.009590     707 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:44.010997     707 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:44.172947     707 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:32:52.184022     707 openshift-tuned.go:550] Pod (openshift-service-ca-operator/service-ca-operator-6b8c7458dd-zqzk2) labels changed node wide: true\n
Feb 18 14:35:09.028 E ns/openshift-sdn pod/sdn-controller-9tcc6 node/ip-10-0-133-37.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0218 14:23:03.232681       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 18 14:35:09.064 E ns/openshift-multus pod/multus-admission-controller-22lq9 node/ip-10-0-133-37.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 18 14:35:09.080 E ns/openshift-controller-manager pod/controller-manager-nxpg8 node/ip-10-0-133-37.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 14:35:09.135 E ns/openshift-multus pod/multus-xm82g node/ip-10-0-133-37.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:35:09.168 E ns/openshift-machine-config-operator pod/machine-config-daemon-plxk7 node/ip-10-0-133-37.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:35:09.187 E ns/openshift-machine-config-operator pod/machine-config-server-htc8l node/ip-10-0-133-37.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0218 14:32:33.943223       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 14:32:33.944371       1 api.go:51] Launching server on :22624\nI0218 14:32:33.944514       1 api.go:51] Launching server on :22623\n
Feb 18 14:35:12.705 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): or resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds"]\nI0218 14:20:03.128597       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0218 14:20:03.128638       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0218 14:20:03.128750       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0218 14:20:03.128782       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0218 14:20:03.128811       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0218 14:20:03.168189       1 policy_controller.go:144] Started "openshift.io/namespace-security-allocation"\nI0218 14:20:03.168372       1 controller_utils.go:1027] Waiting for caches to sync for namespace-security-allocation-controller controller\nI0218 14:20:03.205604       1 policy_controller.go:144] Started "openshift.io/resourcequota"\nI0218 14:20:03.205635       1 policy_controller.go:147] Started Origin Controllers\nI0218 14:20:03.206195       1 resource_quota_controller.go:276] Starting resource quota controller\nI0218 14:20:03.206305       1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller\nI0218 14:20:03.314919       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0218 14:20:03.368832       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0218 14:20:04.029009       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\nW0218 14:32:21.378943       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\n
Feb 18 14:35:12.705 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:31:41.102419       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:31:41.102808       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:31:51.111698       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:31:51.112077       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:01.119404       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:01.120144       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:11.128371       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:11.129241       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:21.136638       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:21.137067       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:31.159201       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:31.159667       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:41.168987       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:41.169288       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:32:51.178169       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:32:51.178457       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 18 14:35:12.705 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): 9:59.184621928 +0000 UTC))\nI0218 14:19:59.184706       1 tlsconfig.go:179] loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-18 13:37:59 +0000 UTC to 2020-02-19 13:37:59 +0000 UTC (now=2020-02-18 14:19:59.184690192 +0000 UTC))\nI0218 14:19:59.185072       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582034132" (2020-02-18 13:55:43 +0000 UTC to 2022-02-17 13:55:44 +0000 UTC (now=2020-02-18 14:19:59.185051498 +0000 UTC))\nI0218 14:19:59.185386       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582035599" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582035597" (2020-02-18 13:19:57 +0000 UTC to 2021-02-17 13:19:57 +0000 UTC (now=2020-02-18 14:19:59.185361472 +0000 UTC))\nI0218 14:19:59.185494       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1582035599" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582035597" (2020-02-18 13:19:57 +0000 UTC to 2021-02-17 13:19:57 +0000 UTC (now=2020-02-18 14:19:59.185476184 +0000 UTC))\nI0218 14:19:59.185525       1 secure_serving.go:178] Serving securely on [::]:10257\nI0218 14:19:59.185558       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0218 14:19:59.185836       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 18 14:35:13.820 E ns/openshift-multus pod/multus-xm82g node/ip-10-0-133-37.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:35:13.929 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-apiserver-5 container exited with code 1 (Error): required revision has been compacted\nE0218 14:32:53.300542       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.300792       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.300980       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301028       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301117       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301178       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301182       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301288       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301535       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.301707       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:32:53.351810       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0218 14:32:53.516356       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-133-37.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 14:32:53.516916       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0218 14:32:53.606817       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nW0218 14:32:53.611897       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.133.8 10.0.145.111]\n
Feb 18 14:35:13.929 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-5 container exited with code 2 (Error): I0218 14:14:47.924254       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 14:35:13.929 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-apiserver-cert-syncer-5 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:24:54.003215       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:24:54.003539       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:24:56.198622       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:24:56.198942       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 14:35:15.305 E ns/openshift-monitoring pod/node-exporter-l6gp9 node/ip-10-0-133-37.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:35:18.881 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 18 14:35:20.728 E ns/openshift-machine-config-operator pod/machine-config-daemon-plxk7 node/ip-10-0-133-37.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 14:35:26.866 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6849599c4f-fslhh node/ip-10-0-145-111.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): .751833       1 secure_serving.go:123] Serving securely on [::]:8443\nI0218 14:33:59.988400       1 leaderelection.go:251] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock\nI0218 14:33:59.988621       1 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"3b76bf0d-7ca7-4b7d-b9ce-fccc883c2983", APIVersion:"v1", ResourceVersion:"34599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 384d46ca-3be0-4ada-8260-19cc59733016 became leader\nI0218 14:34:00.001023       1 logging_controller.go:82] Starting LogLevelController\nI0218 14:34:00.007246       1 workload_controller.go:158] Starting OpenShiftAPIServerOperator\nI0218 14:34:00.007516       1 config_observer_controller.go:148] Starting ConfigObserver\nI0218 14:34:00.007762       1 status_controller.go:198] Starting StatusSyncer-openshift-apiserver\nI0218 14:34:00.008002       1 condition_controller.go:191] Starting EncryptionConditionController\nI0218 14:34:00.008247       1 finalizer_controller.go:119] Starting FinalizerController\nI0218 14:34:00.008366       1 resourcesync_controller.go:217] Starting ResourceSyncController\nI0218 14:34:00.008413       1 revision_controller.go:336] Starting RevisionController\nI0218 14:34:00.009682       1 prune_controller.go:221] Starting PruneController\nI0218 14:34:00.009704       1 unsupportedconfigoverrides_controller.go:151] Starting UnsupportedConfigOverridesController\nI0218 14:34:00.009723       1 migration_controller.go:316] Starting EncryptionMigrationController\nI0218 14:34:00.009744       1 prune_controller.go:193] Starting EncryptionPruneController\nI0218 14:34:00.009769       1 key_controller.go:352] Starting EncryptionKeyController\nI0218 14:34:00.009852       1 state_controller.go:160] Starting EncryptionStateController\nI0218 14:35:25.810158       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:35:25.810220       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:35:29.220 E ns/openshift-console pod/console-b65cbcf7f-7kvnh node/ip-10-0-145-111.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/02/18 14:21:39 cmd/main: cookies are secure!\n2020/02/18 14:21:39 cmd/main: Binding to [::]:8443...\n2020/02/18 14:21:39 cmd/main: using TLS\n
Feb 18 14:35:30.470 E ns/openshift-insights pod/insights-operator-f79cc795b-rsfgs node/ip-10-0-145-111.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:35:33.555 E ns/openshift-machine-config-operator pod/machine-config-controller-77857447cd-68ksb node/ip-10-0-145-111.us-west-1.compute.internal container=machine-config-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:35:34.374 E ns/openshift-service-ca-operator pod/service-ca-operator-6b8c7458dd-hb2jv node/ip-10-0-145-111.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:35:43.401 E ns/openshift-operator-lifecycle-manager pod/packageserver-55d7fc6d89-7qstw node/ip-10-0-133-37.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:36:12.373 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:36:42.373 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:27.373 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:37:27.896 E ns/openshift-operator-lifecycle-manager pod/packageserver-55d7fc6d89-h7s8w node/ip-10-0-133-8.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:47.153 E ns/openshift-monitoring pod/prometheus-adapter-6bd4dddbd-mswfk node/ip-10-0-136-19.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0218 14:18:47.275561       1 adapter.go:93] successfully using in-cluster auth\nI0218 14:18:48.045326       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 14:37:47.250 E ns/openshift-monitoring pod/telemeter-client-5496869b4d-pxcxj node/ip-10-0-136-19.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 18 14:37:47.383 E ns/openshift-marketplace pod/community-operators-554bd5d87c-nm4qg node/ip-10-0-136-19.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 18 14:37:47.408 E ns/openshift-ingress pod/router-default-598949567d-56sbj node/ip-10-0-136-19.us-west-1.compute.internal container=router container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.242 E ns/openshift-monitoring pod/openshift-state-metrics-6b977fcb-92w46 node/ip-10-0-136-19.us-west-1.compute.internal container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.242 E ns/openshift-monitoring pod/openshift-state-metrics-6b977fcb-92w46 node/ip-10-0-136-19.us-west-1.compute.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.242 E ns/openshift-monitoring pod/openshift-state-metrics-6b977fcb-92w46 node/ip-10-0-136-19.us-west-1.compute.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.286 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-136-19.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.286 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-136-19.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:37:48.286 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-136-19.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:38:06.573 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): 250>|StorageEphemeral<115455434152>.".\nI0218 14:35:33.457892       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-7c576587b7-hxvs9 is bound successfully on node "ip-10-0-133-8.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:35:43.332556       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-7c576587b7-qnpjf is bound successfully on node "ip-10-0-133-37.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:35:45.501344       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b-krx8d: no fit: 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) didn't match node selector.; waiting\nE0218 14:35:45.520202       1 factory.go:585] pod is already present in the activeQ\nI0218 14:35:45.530585       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b-krx8d: no fit: 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) didn't match node selector.; waiting\nI0218 14:35:47.398842       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b-krx8d: no fit: 0/6 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) didn't match node selector.; waiting\n
Feb 18 14:38:06.717 E ns/openshift-monitoring pod/node-exporter-5k5jv node/ip-10-0-145-111.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:18:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:38:06.732 E ns/openshift-cluster-node-tuning-operator pod/tuned-z4558 node/ip-10-0-145-111.us-west-1.compute.internal container=tuned container exited with code 143 (Error): nded profile...\nI0218 14:35:33.542528     926 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:35:33.543561     926 openshift-tuned.go:550] Pod (openshift-cluster-storage-operator/cluster-storage-operator-7946fbb7d7-zszbz) labels changed node wide: true\nI0218 14:35:37.863922     926 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:35:37.865904     926 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:35:38.015329     926 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:35:42.573099     926 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-77cddfdcd-bhpd6) labels changed node wide: true\nI0218 14:35:42.864002     926 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:35:42.866148     926 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:35:43.018437     926 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:35:43.018522     926 openshift-tuned.go:550] Pod (openshift-service-ca/configmap-cabundle-injector-676788dbd5-7hrgt) labels changed node wide: true\nI0218 14:35:47.863913     926 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:35:47.865378     926 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:35:47.985053     926 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 14:35:48.526034     926 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-145-111.us-west-1.compute.internal) labels changed node wide: true\nI0218 14:35:48.779801     926 openshift-tuned.go:137] Received signal: terminated\nI0218 14:35:48.779847     926 openshift-tuned.go:304] Sending TERM to PID 1458\n
Feb 18 14:38:06.749 E ns/openshift-controller-manager pod/controller-manager-5xkqm node/ip-10-0-145-111.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 14:38:06.765 E ns/openshift-sdn pod/sdn-controller-gplrs node/ip-10-0-145-111.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0218 14:23:16.140576       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 18 14:38:06.817 E ns/openshift-sdn pod/ovs-shwmx node/ip-10-0-145-111.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): >unix#825: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:35:32.640Z|00258|bridge|INFO|bridge br0: deleted interface veth8cc7be93 on port 29\n2020-02-18T14:35:32.368Z|00038|jsonrpc|WARN|unix#688: send error: Broken pipe\n2020-02-18T14:35:32.368Z|00039|reconnect|WARN|unix#688: connection dropped (Broken pipe)\n2020-02-18T14:35:32.378Z|00040|reconnect|WARN|unix#689: connection dropped (Broken pipe)\n2020-02-18T14:35:32.722Z|00259|connmgr|INFO|br0<->unix#828: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:35:32.777Z|00260|connmgr|INFO|br0<->unix#831: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:35:32.865Z|00261|bridge|INFO|bridge br0: deleted interface veth56c08b74 on port 30\n2020-02-18T14:35:32.968Z|00262|connmgr|INFO|br0<->unix#834: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:35:33.087Z|00263|connmgr|INFO|br0<->unix#838: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:35:33.134Z|00264|bridge|INFO|bridge br0: deleted interface veth4c240bfc on port 32\n2020-02-18T14:35:33.201Z|00265|connmgr|INFO|br0<->unix#842: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:35:33.265Z|00266|connmgr|INFO|br0<->unix#845: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:35:33.371Z|00267|bridge|INFO|bridge br0: deleted interface vetha0291bc3 on port 14\n2020-02-18T14:35:33.483Z|00268|connmgr|INFO|br0<->unix#848: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:35:33.573Z|00269|connmgr|INFO|br0<->unix#851: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:35:33.619Z|00270|bridge|INFO|bridge br0: deleted interface vethec972ed6 on port 26\n2020-02-18T14:35:32.787Z|00041|reconnect|WARN|unix#698: connection dropped (Broken pipe)\n2020-02-18T14:35:32.797Z|00042|reconnect|WARN|unix#699: connection dropped (Broken pipe)\n2020-02-18T14:35:32.981Z|00043|reconnect|WARN|unix#702: connection dropped (Broken pipe)\n2020-02-18T14:35:33.103Z|00044|reconnect|WARN|unix#705: connection dropped (Broken pipe)\n2020-02-18T14:35:33.122Z|00045|reconnect|WARN|unix#707: connection dropped (Broken pipe)\nTerminated\n
Feb 18 14:38:06.841 E ns/openshift-multus pod/multus-admission-controller-xhghm node/ip-10-0-145-111.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 18 14:38:06.871 E ns/openshift-multus pod/multus-6kw5q node/ip-10-0-145-111.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:38:06.945 E ns/openshift-machine-config-operator pod/machine-config-daemon-bkzbv node/ip-10-0-145-111.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:38:06.982 E ns/openshift-machine-config-operator pod/machine-config-server-rs9gm node/ip-10-0-145-111.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0218 14:32:27.642053       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 14:32:27.642987       1 api.go:51] Launching server on :22624\nI0218 14:32:27.643054       1 api.go:51] Launching server on :22623\n
Feb 18 14:38:10.894 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=kube-apiserver-5 container exited with code 1 (Error): red revision has been compacted\nE0218 14:35:48.457722       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.458123       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.458232       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.458247       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.463053       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583193       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583255       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583427       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583431       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583469       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583202       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583624       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:35:48.583660       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0218 14:35:48.982588       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-145-111.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 14:35:48.982801       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 18 14:38:10.894 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-5 container exited with code 2 (Error): I0218 14:18:38.120459       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 14:38:10.894 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=kube-apiserver-cert-syncer-5 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:28:43.080924       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:28:43.081343       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:28:43.290064       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:28:43.290412       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 14:38:12.145 E ns/openshift-sdn pod/sdn-749jm node/ip-10-0-145-111.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:38:12.212 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): enshift.io/v1, Resource=credentialsrequests": unable to monitor quota for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks"]\nI0218 14:34:07.730473       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0218 14:34:07.730486       1 policy_controller.go:147] Started Origin Controllers\nI0218 14:34:07.730496       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0218 14:34:07.730872       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0218 14:34:07.730902       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0218 14:34:07.735761       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0218 14:34:07.771106       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0218 14:34:07.811000       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0218 14:34:08.531159       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\n
Feb 18 14:38:12.212 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error): -ca-bundle true}]\nI0218 14:34:55.092662       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:35:05.105117       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:35:05.105472       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:35:15.116366       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:35:15.116721       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:35:25.124499       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:35:25.124959       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:35:35.135472       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:35:35.135912       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:35:45.152128       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:35:45.153071       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nE0218 14:35:49.078377       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=34208&timeout=7m3s&timeoutSeconds=423&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0218 14:35:49.078497       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=36148&timeout=9m55s&timeoutSeconds=595&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Feb 18 14:38:12.212 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-111.us-west-1.compute.internal node/ip-10-0-145-111.us-west-1.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): ageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0218 14:35:43.373639       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0218 14:35:45.485602       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-54c45cfc8b, need 3, creating 1\nI0218 14:35:45.501900       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-54c45cfc8b", UID:"e3160912-f5c9-4bec-a783-bb115802dccd", APIVersion:"apps/v1", ResourceVersion:"36122", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-54c45cfc8b-krx8d\nI0218 14:35:45.541889       1 deployment_controller.go:484] Error syncing deployment openshift-machine-config-operator/etcd-quorum-guard: Operation cannot be fulfilled on deployments.apps "etcd-quorum-guard": the object has been modified; please apply your changes to the latest version and try again\nI0218 14:35:48.541848       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0218 14:35:48.541928       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"7d9232c7-7f6e-4f52-8a1c-01d97a7dd896", APIVersion:"v1", ResourceVersion:"36218", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\n
Feb 18 14:38:12.297 E ns/openshift-multus pod/multus-6kw5q node/ip-10-0-145-111.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:38:15.534 E ns/openshift-multus pod/multus-6kw5q node/ip-10-0-145-111.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:38:19.040 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-149-108.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T14:38:17.110Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T14:38:17.116Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T14:38:17.117Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T14:38:17.118Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T14:38:17.118Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T14:38:17.118Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T14:38:17.121Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T14:38:17.121Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 14:38:19.570 E ns/openshift-machine-config-operator pod/machine-config-daemon-bkzbv node/ip-10-0-145-111.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 14:38:37.488 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7c59d48cc5-xn5hw node/ip-10-0-133-8.us-west-1.compute.internal container=operator container exited with code 255 (Error): alog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:37:44.513484       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0218 14:37:44.513518       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0218 14:37:44.515090       1 httplog.go:90] GET /metrics: (1.755733ms) 200 [Prometheus/2.14.0 10.129.2.38:34480]\nI0218 14:37:45.070487       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 27 items received\nI0218 14:37:49.825152       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:37:59.833259       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:38:05.401890       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0218 14:38:05.401918       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0218 14:38:05.403308       1 httplog.go:90] GET /metrics: (5.9386ms) 200 [Prometheus/2.14.0 10.128.2.17:51146]\nI0218 14:38:09.846897       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:38:19.855206       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:38:29.863824       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0218 14:38:34.195576       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:38:34.196768       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:38:39.009 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-5479bd4498-4k5f8 node/ip-10-0-133-8.us-west-1.compute.internal container=operator container exited with code 255 (Error): metheus/2.14.0 10.129.2.38:36262]\nI0218 14:37:45.067767       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 27 items received\nI0218 14:37:47.828937       1 request.go:538] Throttling request took 152.219979ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0218 14:37:48.028029       1 request.go:538] Throttling request took 196.076687ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0218 14:37:56.032693       1 httplog.go:90] GET /metrics: (7.842429ms) 200 [Prometheus/2.14.0 10.128.2.17:33240]\nI0218 14:38:07.825600       1 request.go:538] Throttling request took 170.648617ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0218 14:38:08.025668       1 request.go:538] Throttling request took 197.753734ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0218 14:38:26.033669       1 httplog.go:90] GET /metrics: (8.819674ms) 200 [Prometheus/2.14.0 10.128.2.17:33240]\nI0218 14:38:27.825626       1 request.go:538] Throttling request took 172.062258ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0218 14:38:28.025614       1 request.go:538] Throttling request took 197.124885ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0218 14:38:32.113574       1 httplog.go:90] GET /metrics: (1.724946ms) 200 [Prometheus/2.14.0 10.131.0.19:50140]\nI0218 14:38:35.557662       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:38:35.557712       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:38:39.308 E ns/openshift-console-operator pod/console-operator-66d6d9bb7f-bqs77 node/ip-10-0-133-8.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): 28\nE0218 14:36:19.177908       1 reflector.go:280] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nE0218 14:36:20.256291       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-02-18-133328\nE0218 14:36:23.258516       1 reflector.go:280] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: Failed to watch *v1.OAuthClient: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io)\nE0218 14:36:24.315143       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-02-18-133328\nI0218 14:37:26.751193       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-18T14:00:57Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T14:21:40Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-18T14:37:26Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T14:00:58Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0218 14:37:26.763187       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"41777fa7-e530-4f33-8eb3-60dc52a3994a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nE0218 14:37:27.770189       1 controller.go:280] clidownloads-sync-work-queue-key failed with : the server is currently unable to handle the request (get routes.route.openshift.io downloads)\nI0218 14:38:35.244154       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:38:35.244315       1 leaderelection.go:66] leaderelection lost\n
Feb 18 14:38:39.415 E ns/openshift-console pod/console-b65cbcf7f-cm4g9 node/ip-10-0-133-8.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/02/18 14:21:19 cmd/main: cookies are secure!\n2020/02/18 14:21:20 cmd/main: Binding to [::]:8443...\n2020/02/18 14:21:20 cmd/main: using TLS\n
Feb 18 14:38:40.770 E ns/openshift-machine-config-operator pod/machine-config-controller-77857447cd-79gc2 node/ip-10-0-133-8.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 19.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0218 14:37:44.555219       1 node_controller.go:433] Pool worker: node ip-10-0-136-19.us-west-1.compute.internal is now reporting unready: node ip-10-0-136-19.us-west-1.compute.internal is reporting Unschedulable\nW0218 14:37:45.241450       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 36282 (37029)\nI0218 14:38:06.305474       1 node_controller.go:433] Pool master: node ip-10-0-145-111.us-west-1.compute.internal is now reporting unready: node ip-10-0-145-111.us-west-1.compute.internal is reporting NotReady=False\nI0218 14:38:26.689846       1 node_controller.go:433] Pool master: node ip-10-0-145-111.us-west-1.compute.internal is now reporting unready: node ip-10-0-145-111.us-west-1.compute.internal is reporting Unschedulable\nI0218 14:38:27.756597       1 node_controller.go:442] Pool master: node ip-10-0-145-111.us-west-1.compute.internal has completed update to rendered-master-fa1e6132d7b673bee551b04b5a720721\nI0218 14:38:27.767757       1 node_controller.go:435] Pool master: node ip-10-0-145-111.us-west-1.compute.internal is now reporting ready\nI0218 14:38:31.690288       1 node_controller.go:758] Setting node ip-10-0-133-8.us-west-1.compute.internal to desired config rendered-master-fa1e6132d7b673bee551b04b5a720721\nI0218 14:38:31.719942       1 node_controller.go:452] Pool master: node ip-10-0-133-8.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-fa1e6132d7b673bee551b04b5a720721\nI0218 14:38:32.731703       1 node_controller.go:452] Pool master: node ip-10-0-133-8.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0218 14:38:32.760677       1 node_controller.go:433] Pool master: node ip-10-0-133-8.us-west-1.compute.internal is now reporting unready: node ip-10-0-133-8.us-west-1.compute.internal is reporting Unschedulable\n
Feb 18 14:38:41.442 E ns/openshift-authentication-operator pod/authentication-operator-69bdc6b45-k7dlg node/ip-10-0-133-8.us-west-1.compute.internal container=operator container exited with code 255 (Error): dle the request (post oauthclients.oauth.openshift.io)","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T14:35:30Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-02-18T14:08:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T14:00:58Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0218 14:37:18.575626       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"3f4388fb-d33e-478f-af94-52ce79e6ebc2", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)" to "OperatorSyncDegraded: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)"\nE0218 14:37:21.626587       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nE0218 14:37:27.770747       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nE0218 14:38:00.090297       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nI0218 14:38:37.195524       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 14:38:37.195673       1 leaderelection.go:66] leaderelection lost\nI0218 14:38:37.201500       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\n
Feb 18 14:38:41.893 E ns/openshift-monitoring pod/prometheus-operator-7b4479fb84-2hbpr node/ip-10-0-133-8.us-west-1.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:38:41.928 E ns/openshift-service-ca-operator pod/service-ca-operator-6b8c7458dd-62g8m node/ip-10-0-133-8.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Feb 18 14:38:41.976 E ns/openshift-monitoring pod/thanos-querier-67d765c5c7-4mdxd node/ip-10-0-133-8.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/18 14:18:35 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 14:18:35 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 14:18:35 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 14:18:35 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/18 14:18:35 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 14:18:35 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 14:18:35 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 14:18:35 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/18 14:18:35 http.go:96: HTTPS: listening on [::]:9091\n
Feb 18 14:38:42.057 E ns/openshift-cluster-machine-approver pod/machine-approver-5b86675bd9-nmf2t node/ip-10-0-133-8.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0218 14:18:02.680863       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0218 14:18:02.680933       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0218 14:18:02.681668       1 main.go:236] Starting Machine Approver\nI0218 14:18:02.782193       1 main.go:146] CSR csr-wwqlk added\nI0218 14:18:02.782230       1 main.go:149] CSR csr-wwqlk is already approved\nI0218 14:18:02.782247       1 main.go:146] CSR csr-x4xhv added\nI0218 14:18:02.782256       1 main.go:149] CSR csr-x4xhv is already approved\nI0218 14:18:02.782271       1 main.go:146] CSR csr-xbc5j added\nI0218 14:18:02.782280       1 main.go:149] CSR csr-xbc5j is already approved\nI0218 14:18:02.782306       1 main.go:146] CSR csr-62bwr added\nI0218 14:18:02.782315       1 main.go:149] CSR csr-62bwr is already approved\nI0218 14:18:02.782338       1 main.go:146] CSR csr-8xssn added\nI0218 14:18:02.782348       1 main.go:149] CSR csr-8xssn is already approved\nI0218 14:18:02.782359       1 main.go:146] CSR csr-9jf7b added\nI0218 14:18:02.782368       1 main.go:149] CSR csr-9jf7b is already approved\nI0218 14:18:02.782379       1 main.go:146] CSR csr-dhnhm added\nI0218 14:18:02.782387       1 main.go:149] CSR csr-dhnhm is already approved\nI0218 14:18:02.782402       1 main.go:146] CSR csr-jfj8s added\nI0218 14:18:02.782410       1 main.go:149] CSR csr-jfj8s is already approved\nI0218 14:18:02.782423       1 main.go:146] CSR csr-tvpqg added\nI0218 14:18:02.782433       1 main.go:149] CSR csr-tvpqg is already approved\nI0218 14:18:02.782449       1 main.go:146] CSR csr-5k22p added\nI0218 14:18:02.782470       1 main.go:149] CSR csr-5k22p is already approved\nI0218 14:18:02.782483       1 main.go:146] CSR csr-6n9cm added\nI0218 14:18:02.782492       1 main.go:149] CSR csr-6n9cm is already approved\nI0218 14:18:02.782505       1 main.go:146] CSR csr-cfwqx added\nI0218 14:18:02.782525       1 main.go:149] CSR csr-cfwqx is already approved\n
Feb 18 14:38:43.166 E ns/openshift-service-ca pod/service-serving-cert-signer-789458465f-k8zg9 node/ip-10-0-133-8.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 18 14:38:43.260 E ns/openshift-service-ca pod/configmap-cabundle-injector-676788dbd5-hpzdf node/ip-10-0-133-8.us-west-1.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Feb 18 14:38:54.444 E ns/openshift-operator-lifecycle-manager pod/packageserver-7c576587b7-dks29 node/ip-10-0-145-111.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:38:57.570 E kube-apiserver Kube API started failing: Get https://api.ci-op-16llxmvs-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Feb 18 14:39:07.993 E ns/openshift-monitoring pod/prometheus-operator-7b4479fb84-b57h9 node/ip-10-0-145-111.us-west-1.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-02-18T14:39:05.777108919Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."\nts=2020-02-18T14:39:05.828840165Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-02-18T14:39:05.832163321Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 18 14:39:08.083 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-37.us-west-1.compute.internal node/ip-10-0-133-37.us-west-1.compute.internal container=kube-controller-manager-7 container exited with code 255 (Error): stance:""}': 'etcdserver: request timed out' (will not retry!)\nI0218 14:39:06.104095       1 service_controller.go:695] Detected change in list of current cluster nodes. New node set: map[ip-10-0-133-37.us-west-1.compute.internal:{} ip-10-0-141-240.us-west-1.compute.internal:{} ip-10-0-145-111.us-west-1.compute.internal:{} ip-10-0-149-108.us-west-1.compute.internal:{}]\nI0218 14:39:06.157760       1 aws_loadbalancer.go:1375] Instances added to load-balancer ab3ddcea0e9ac41a7a104c6ff603f454\nI0218 14:39:06.177569       1 aws_loadbalancer.go:1386] Instances removed from load-balancer ab3ddcea0e9ac41a7a104c6ff603f454\nI0218 14:39:06.530151       1 event.go:255] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-upgrade-7064", Name:"service-test", UID:"b3ddcea0-e9ac-41a7-a104-c6ff603f4542", APIVersion:"v1", ResourceVersion:"17695", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0218 14:39:06.565029       1 aws_loadbalancer.go:1375] Instances added to load-balancer ae2b8b0dabbf043a7bdee9afac0081f6\nI0218 14:39:06.581754       1 aws_loadbalancer.go:1386] Instances removed from load-balancer ae2b8b0dabbf043a7bdee9afac0081f6\nI0218 14:39:07.062068       1 service_controller.go:703] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0218 14:39:07.062374       1 event.go:255] Event(v1.ObjectReference{Kind:"Service", Namespace:"openshift-ingress", Name:"router-default", UID:"e2b8b0da-bbf0-43a7-bdee-9afac0081f62", APIVersion:"v1", ResourceVersion:"11044", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nE0218 14:39:07.062846       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: etcdserver: request timed out\nI0218 14:39:07.071817       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0218 14:39:07.071967       1 controllermanager.go:291] leaderelection lost\n
Feb 18 14:39:08.243 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-598bfb56fb-whghl node/ip-10-0-133-37.us-west-1.compute.internal container=manager container exited with code 1 (Error): -02-18T14:36:04Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-02-18T14:36:04Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-02-18T14:36:04Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-02-18T14:36:10Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-02-18T14:38:04Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-02-18T14:38:04Z" level=info msg="reconcile complete" controller=metrics elapsed=2.063085ms\nE0218 14:39:06.826490       1 leaderelection.go:306] error retrieving resource lock openshift-cloud-credential-operator/cloud-credential-operator-leader: etcdserver: request timed out\nE0218 14:39:07.429477       1 event.go:247] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'cloud-credential-operator-598bfb56fb-whghl_f2cea8b8-525b-11ea-887c-0a580a81000e stopped leading'\ntime="2020-02-18T14:39:07Z" level=error msg="leader election lostunable to run the manager"\n
Feb 18 14:39:11.121 E ns/openshift-operator-lifecycle-manager pod/packageserver-55cc4c674d-2ktj6 node/ip-10-0-133-37.us-west-1.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-18T14:39:10Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 18 14:39:27.373 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:39:57.373 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 14:40:01.045 E clusteroperator/authentication changed Degraded to True: OperatorSyncDegradedError: OperatorSyncDegraded: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)
Feb 18 14:40:13.288 E ns/openshift-cluster-node-tuning-operator pod/tuned-nw2jf node/ip-10-0-136-19.us-west-1.compute.internal container=tuned container exited with code 143 (Error): n-0) labels changed node wide: true\nI0218 14:32:39.089776     895 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:39.091545     895 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:39.209536     895 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:32:41.667848     895 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0218 14:32:44.089758     895 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:32:44.091875     895 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:32:44.220165     895 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:35:49.072556     895 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 14:35:49.076100     895 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 14:35:49.076150     895 openshift-tuned.go:883] Increasing resyncPeriod to 132\nI0218 14:38:01.076348     895 openshift-tuned.go:209] Extracting tuned profiles\nI0218 14:38:01.078805     895 openshift-tuned.go:739] Resync period to pull node/pod labels: 132 [s]\nI0218 14:38:01.097943     895 openshift-tuned.go:550] Pod (openshift-dns/dns-default-q2bxw) labels changed node wide: true\nI0218 14:38:06.095245     895 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:38:06.096936     895 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0218 14:38:06.098375     895 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:38:06.217834     895 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:38:17.219248     895 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-7064/service-test-dmwh9) labels changed node wide: true\n
Feb 18 14:40:13.449 E ns/openshift-monitoring pod/node-exporter-4mvsp node/ip-10-0-136-19.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:19:50Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:50Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:40:13.516 E ns/openshift-sdn pod/ovs-kvjp8 node/ip-10-0-136-19.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): ge br0: deleted interface vetha96b3ac6 on port 3\n2020-02-18T14:37:46.965Z|00187|connmgr|INFO|br0<->unix#844: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:37:46.519Z|00019|jsonrpc|WARN|Dropped 2 log messages in last 887 seconds (most recently, 887 seconds ago) due to excessive rate\n2020-02-18T14:37:46.519Z|00020|jsonrpc|WARN|unix#742: send error: Broken pipe\n2020-02-18T14:37:46.519Z|00021|reconnect|WARN|unix#742: connection dropped (Broken pipe)\n2020-02-18T14:37:47.030Z|00188|connmgr|INFO|br0<->unix#847: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:37:47.056Z|00189|bridge|INFO|bridge br0: deleted interface veth5ca24095 on port 6\n2020-02-18T14:37:47.099Z|00190|connmgr|INFO|br0<->unix#850: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:37:47.157Z|00191|connmgr|INFO|br0<->unix#853: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:37:47.186Z|00192|bridge|INFO|bridge br0: deleted interface veth0a9c163b on port 21\n2020-02-18T14:37:47.235Z|00193|connmgr|INFO|br0<->unix#856: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:37:47.281Z|00194|connmgr|INFO|br0<->unix#859: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:37:47.310Z|00195|bridge|INFO|bridge br0: deleted interface veth9ad603ee on port 20\n2020-02-18T14:37:47.355Z|00196|connmgr|INFO|br0<->unix#862: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:37:47.399Z|00197|connmgr|INFO|br0<->unix#865: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:37:47.430Z|00198|bridge|INFO|bridge br0: deleted interface vethc5182155 on port 16\n2020-02-18T14:37:47.423Z|00022|jsonrpc|WARN|unix#778: receive error: Connection reset by peer\n2020-02-18T14:37:47.423Z|00023|reconnect|WARN|unix#778: connection dropped (Connection reset by peer)\n2020-02-18T14:38:15.748Z|00199|connmgr|INFO|br0<->unix#889: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:38:15.775Z|00200|connmgr|INFO|br0<->unix#892: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:38:15.797Z|00201|bridge|INFO|bridge br0: deleted interface veth0e0576f3 on port 4\nTerminated\n
Feb 18 14:40:13.639 E ns/openshift-multus pod/multus-nt9qn node/ip-10-0-136-19.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:40:13.727 E ns/openshift-machine-config-operator pod/machine-config-daemon-nfsxq node/ip-10-0-136-19.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:40:16.700 E ns/openshift-multus pod/multus-nt9qn node/ip-10-0-136-19.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:40:23.540 E ns/openshift-machine-config-operator pod/machine-config-daemon-nfsxq node/ip-10-0-136-19.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 14:40:50.696 E ns/openshift-ingress pod/router-default-598949567d-5kn26 node/ip-10-0-141-240.us-west-1.compute.internal container=router container exited with code 2 (Error): 23] github.com/openshift/router/pkg/router/controller/factory/factory.go:115: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0218 14:39:44.145267       1 reflector.go:123] github.com/openshift/router/pkg/router/controller/factory/factory.go:115: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0218 14:39:47.217621       1 reflector.go:123] github.com/openshift/router/pkg/router/controller/factory/factory.go:115: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nI0218 14:40:12.724925       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:17.731454       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:23.384378       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:28.374914       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:34.582495       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:39.578663       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 14:40:49.387399       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 14:40:50.773 E ns/openshift-marketplace pod/community-operators-58498c6cbd-th7t9 node/ip-10-0-141-240.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 18 14:40:52.113 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-240.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:40:52.113 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-240.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:40:52.113 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-240.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 14:41:14.788 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): ers\nI0218 14:36:54.632886       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0218 14:36:54.633427       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0218 14:36:54.633508       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0218 14:36:54.634904       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0218 14:36:54.787564       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0218 14:36:54.815115       1 controller_utils.go:1034] Caches are synced for resource quota controller\nE0218 14:36:57.050806       1 namespace_scc_allocation_controller.go:214] the server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)\nE0218 14:36:57.051191       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0218 14:36:57.051241       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0218 14:37:00.122171       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nI0218 14:37:01.133826       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\nE0218 14:37:09.338633       1 namespace_scc_allocation_controller.go:214] the server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)\nE0218 14:37:18.554300       1 namespace_scc_allocation_controller.go:214] the server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)\n
Feb 18 14:41:14.788 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:37:41.991397       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:37:41.992226       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:37:52.000430       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:37:52.000752       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:02.019662       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:02.020186       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:12.027627       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:12.028102       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:22.035953       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:22.036392       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:32.044729       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:32.045184       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:42.054255       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:42.054664       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 14:38:52.061299       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:38:52.063323       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 18 14:41:14.788 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): ck-client@1582035402" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582035402" (2020-02-18 13:16:41 +0000 UTC to 2021-02-17 13:16:41 +0000 UTC (now=2020-02-18 14:16:42.887657387 +0000 UTC))\nI0218 14:16:42.887716       1 secure_serving.go:178] Serving securely on [::]:10257\nI0218 14:16:42.887751       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0218 14:16:42.888910       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0218 14:16:47.545118       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found]\n
Feb 18 14:41:14.828 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=scheduler container exited with code 2 (Error):     1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-fgdbz is bound successfully on node "ip-10-0-141-240.us-west-1.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804976Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:38:54.959736       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-vhp28 is bound successfully on node "ip-10-0-145-111.us-west-1.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:38:55.031561       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-l2m9g is bound successfully on node "ip-10-0-149-108.us-west-1.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:38:55.181907       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-lt2k8 is bound successfully on node "ip-10-0-133-37.us-west-1.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 14:38:56.094464       1 scheduler.go:667] pod openshift-marketplace/certified-operators-6778ddf6cc-prwcc is bound successfully on node "ip-10-0-141-240.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804976Ki>|Pods<250>|StorageEphemeral<115455434152>.".\n
Feb 18 14:41:14.919 E ns/openshift-monitoring pod/node-exporter-wj9z2 node/ip-10-0-133-8.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:19:17Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:19:17Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:41:14.938 E ns/openshift-controller-manager pod/controller-manager-jlhmw node/ip-10-0-133-8.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 14:41:14.978 E ns/openshift-sdn pod/sdn-controller-bmb5p node/ip-10-0-133-8.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): 7.414701       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0218 14:22:57.438968       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"75049faf-b5be-40f3-8c6f-04fd7bec121d", ResourceVersion:"28299", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717630716, loc:(*time.Location)(0x2b77ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-133-8\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-18T13:51:56Z\",\"renewTime\":\"2020-02-18T14:22:57Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-133-8 became leader'\nI0218 14:22:57.439095       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0218 14:22:57.448700       1 master.go:51] Initializing SDN master\nI0218 14:22:57.467672       1 network_controller.go:60] Started OpenShift Network Controller\nW0218 14:35:49.472655       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 20682 (36248)\nW0218 14:35:49.486184       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 24760 (36248)\n
Feb 18 14:41:14.997 E ns/openshift-sdn pod/ovs-fz2ds node/ip-10-0-133-8.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): 0-02-18T14:38:42.720Z|00478|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:42.728Z|00479|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:42.783Z|00480|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:42.800Z|00481|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:45.372Z|00044|reconnect|WARN|unix#967: connection dropped (Connection reset by peer)\n2020-02-18T14:38:45.286Z|00482|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:45.313Z|00483|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:45.317Z|00484|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:45.323Z|00485|bridge|INFO|bridge br0: added interface veth4010171c on port 39\n2020-02-18T14:38:45.330Z|00486|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:45.355Z|00487|connmgr|INFO|br0<->unix#1104: 5 flow_mods in the last 0 s (5 adds)\n2020-02-18T14:38:45.410Z|00488|connmgr|INFO|br0<->unix#1108: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:38:45.412Z|00489|connmgr|INFO|br0<->unix#1110: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-18T14:38:48.114Z|00490|connmgr|INFO|br0<->unix#1116: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:38:48.183Z|00491|connmgr|INFO|br0<->unix#1119: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:38:48.214Z|00492|bridge|INFO|bridge br0: deleted interface veth4010171c on port 39\n2020-02-18T14:38:48.222Z|00493|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:48.226Z|00494|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:48.279Z|00495|bridge|WARN|could not open network device veth829eb847 (No such device)\n2020-02-18T14:38:48.285Z|00496|bridge|WARN|could not open network device veth829eb847 (No such device)\nTerminated\n
Feb 18 14:41:15.020 E ns/openshift-multus pod/multus-admission-controller-mg7f5 node/ip-10-0-133-8.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 18 14:41:15.074 E ns/openshift-multus pod/multus-bxwhg node/ip-10-0-133-8.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:41:15.161 E ns/openshift-machine-config-operator pod/machine-config-daemon-d8rwt node/ip-10-0-133-8.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:41:15.183 E ns/openshift-machine-config-operator pod/machine-config-server-h5fvd node/ip-10-0-133-8.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0218 14:32:24.468119       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 14:32:24.469281       1 api.go:51] Launching server on :22624\nI0218 14:32:24.469378       1 api.go:51] Launching server on :22623\n
Feb 18 14:41:15.232 E ns/openshift-cluster-node-tuning-operator pod/tuned-mblmr node/ip-10-0-133-8.us-west-1.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0218 14:38:55.712001     542 openshift-tuned.go:209] Extracting tuned profiles\nI0218 14:38:55.715213     542 openshift-tuned.go:739] Resync period to pull node/pod labels: 54 [s]\nI0218 14:38:55.740331     542 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-mblmr) labels changed node wide: true\n
Feb 18 14:41:19.015 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-apiserver-5 container exited with code 1 (Error): tcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 14:38:56.987502       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0218 14:38:57.013146       1 cacher.go:771] cacher (*core.Pod): 3 objects queued in incoming channel.\nI0218 14:38:57.013177       1 cacher.go:771] cacher (*core.Pod): 4 objects queued in incoming channel.\nI0218 14:38:57.013191       1 cacher.go:771] cacher (*core.Pod): 5 objects queued in incoming channel.\nE0218 14:38:57.024724       1 watch.go:256] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*framer.lengthDelimitedFrameWriter)(0xc021339460), encoder:(*versioning.codec)(0xc01cc67c20), buf:(*bytes.Buffer)(0xc00ebad350)})\nI0218 14:38:57.154310       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-133-8.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 14:38:57.154541       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nE0218 14:38:57.249311       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nI0218 14:38:57.364550       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0218 14:38:57.364750       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0218 14:38:57.364931       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0218 14:38:57.367732       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0218 14:38:57.367832       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0218 14:38:57.367944       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\n
Feb 18 14:41:19.015 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-5 container exited with code 2 (Error): I0218 14:15:37.160303       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 14:41:19.015 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-8.us-west-1.compute.internal node/ip-10-0-133-8.us-west-1.compute.internal container=kube-apiserver-cert-syncer-5 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:36:48.523644       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:36:48.524012       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 14:36:48.729936       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 14:36:48.730303       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 14:41:19.525 E ns/openshift-marketplace pod/redhat-operators-55f798ccf7-p5nm9 node/ip-10-0-149-108.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 18 14:41:20.047 E ns/openshift-monitoring pod/node-exporter-wj9z2 node/ip-10-0-133-8.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:41:20.099 E ns/openshift-multus pod/multus-bxwhg node/ip-10-0-133-8.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:41:22.501 E ns/openshift-multus pod/multus-bxwhg node/ip-10-0-133-8.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:41:27.572 E ns/openshift-machine-config-operator pod/machine-config-daemon-d8rwt node/ip-10-0-133-8.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 14:41:35.568 E ns/openshift-marketplace pod/community-operators-554bd5d87c-6kgqz node/ip-10-0-149-108.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 18 14:41:41.908 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator monitoring is reporting a failure: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Feb 18 14:43:18.652 E ns/openshift-monitoring pod/node-exporter-2qhnz node/ip-10-0-141-240.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 2-18T14:18:24Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T14:18:24Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 14:43:18.682 E ns/openshift-sdn pod/ovs-nt9s4 node/ip-10-0-141-240.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): 77|bridge|INFO|bridge br0: deleted interface veth55c009db on port 4\n2020-02-18T14:40:51.217Z|00178|connmgr|INFO|br0<->unix#947: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:40:51.266Z|00179|connmgr|INFO|br0<->unix#950: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:40:51.296Z|00180|bridge|INFO|bridge br0: deleted interface veth84557b76 on port 5\n2020-02-18T14:40:51.289Z|00028|jsonrpc|WARN|unix#850: receive error: Connection reset by peer\n2020-02-18T14:40:51.289Z|00029|reconnect|WARN|unix#850: connection dropped (Connection reset by peer)\n2020-02-18T14:41:19.909Z|00181|connmgr|INFO|br0<->unix#973: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:41:19.942Z|00182|connmgr|INFO|br0<->unix#976: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:41:19.970Z|00183|bridge|INFO|bridge br0: deleted interface veth3cb8e6d2 on port 6\n2020-02-18T14:41:20.009Z|00184|connmgr|INFO|br0<->unix#980: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:41:20.055Z|00185|connmgr|INFO|br0<->unix#983: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:41:20.086Z|00186|bridge|INFO|bridge br0: deleted interface veth364b0b3c on port 7\n2020-02-18T14:41:20.126Z|00187|connmgr|INFO|br0<->unix#986: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T14:41:20.170Z|00188|connmgr|INFO|br0<->unix#989: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T14:41:20.194Z|00189|bridge|INFO|bridge br0: deleted interface vethdbb41164 on port 15\n2020-02-18T14:41:19.962Z|00030|jsonrpc|WARN|unix#874: receive error: Connection reset by peer\n2020-02-18T14:41:19.963Z|00031|reconnect|WARN|unix#874: connection dropped (Connection reset by peer)\n2020-02-18T14:41:20.187Z|00032|jsonrpc|WARN|unix#885: receive error: Connection reset by peer\n2020-02-18T14:41:20.188Z|00033|reconnect|WARN|unix#885: connection dropped (Connection reset by peer)\n2020-02-18T14:41:23.868Z|00034|jsonrpc|WARN|unix#890: receive error: Connection reset by peer\n2020-02-18T14:41:23.868Z|00035|reconnect|WARN|unix#890: connection dropped (Connection reset by peer)\nTerminated\n
Feb 18 14:43:18.722 E ns/openshift-multus pod/multus-x4lw8 node/ip-10-0-141-240.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 14:43:18.741 E ns/openshift-cluster-node-tuning-operator pod/tuned-fgdbz node/ip-10-0-141-240.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 8 14:41:00,863 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-18 14:41:00,867 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-18 14:41:00,868 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-18 14:41:00,870 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-18 14:41:00,977 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-18 14:41:00,985 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0218 14:41:01.906094     640 openshift-tuned.go:550] Pod (openshift-ingress/router-default-598949567d-5kn26) labels changed node wide: true\nI0218 14:41:05.570646     640 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:41:05.572249     640 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:41:05.686212     640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:41:22.133527     640 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-7064/service-test-njv6c) labels changed node wide: true\nI0218 14:41:25.570670     640 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 14:41:25.572400     640 openshift-tuned.go:441] Getting recommended profile...\nI0218 14:41:25.691219     640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 14:41:31.896025     640 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-7373/foo-rkd2w) labels changed node wide: true\n2020-02-18 14:41:32,546 INFO     tuned.daemon.controller: terminating controller\n2020-02-18 14:41:32,547 INFO     tuned.daemon.daemon: stopping tuning\nI0218 14:41:32.547119     640 openshift-tuned.go:137] Received signal: terminated\nI0218 14:41:32.547173     640 openshift-tuned.go:304] Sending TERM to PID 9680\n
Feb 18 14:43:18.752 E ns/openshift-machine-config-operator pod/machine-config-daemon-k892n node/ip-10-0-141-240.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 14:43:23.276 E ns/openshift-multus pod/multus-x4lw8 node/ip-10-0-141-240.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Feb 18 14:43:29.083 E ns/openshift-machine-config-operator pod/machine-config-daemon-k892n node/ip-10-0-141-240.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error):