Result: SUCCESS
Tests: 4 failed / 20 succeeded
Started: 2020-04-22 20:05
Elapsed: 1h24m
Work namespace: ci-op-1isycq45
Refs: release-4.3:8e4367fb
      128:d50ee209
Pod: 965471af-84d4-11ea-b5a6-0a58ac104272
Repo: openshift/cluster-node-tuning-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 36m39s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 4s of 32m10s (0%):

Apr 22 21:01:13.732 E ns/e2e-k8s-service-lb-available-1658 svc/service-test Service stopped responding to GET requests on reused connections
Apr 22 21:01:14.731 E ns/e2e-k8s-service-lb-available-1658 svc/service-test Service is not responding to GET requests on reused connections
Apr 22 21:01:14.789 I ns/e2e-k8s-service-lb-available-1658 svc/service-test Service started responding to GET requests on reused connections
Apr 22 21:10:16.732 E ns/e2e-k8s-service-lb-available-1658 svc/service-test Service stopped responding to GET requests on reused connections
Apr 22 21:10:16.789 I ns/e2e-k8s-service-lb-available-1658 svc/service-test Service started responding to GET requests on reused connections
Apr 22 21:16:22.732 E ns/e2e-k8s-service-lb-available-1658 svc/service-test Service stopped responding to GET requests over new connections
Apr 22 21:16:22.796 I ns/e2e-k8s-service-lb-available-1658 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1587590534.xml
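
For reference, the "reused connections" versus "new connections" wording in the events above distinguishes probes sent over a kept-alive HTTP connection from probes that open a fresh connection for every request. A minimal sketch of that kind of poller (not the actual openshift-tests monitor; the service URL is a placeholder):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder endpoint; the real test targets the e2e service load balancer.
        const url = "http://service-test.example.invalid/"

        // Client that reuses connections (HTTP keep-alive, the default Transport behavior).
        reused := &http.Client{Timeout: 5 * time.Second}

        // Client that opens a new connection for every request.
        fresh := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{DisableKeepAlives: true},
        }

        for {
            for name, c := range map[string]*http.Client{"reused": reused, "new": fresh} {
                if resp, err := c.Get(url); err != nil {
                    fmt.Printf("%s connections: service stopped responding to GET: %v\n", name, err)
                } else {
                    resp.Body.Close()
                }
            }
            time.Sleep(time.Second)
        }
    }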


Cluster upgrade Cluster frontend ingress remain available 35m9s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 4m24s of 35m8s (13%):

Apr 22 20:57:48.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 22 20:57:48.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 20:57:48.206 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 20:57:48.206 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 22 20:58:36.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 20:58:36.210 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:00:42.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:00:42.232 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:01:13.104 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:01:13.104 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:01:13.209 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:01:13.227 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:01:20.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:01:21.103 - 13s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 22 21:01:24.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:01:25.103 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:01:32.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:01:33.103 - 8s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 22 21:01:34.228 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:01:35.228 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:01:42.521 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:02:06.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:02:06.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 22 21:02:06.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:02:07.103 - 2s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Apr 22 21:02:07.103 - 2s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 22 21:02:07.103 - 2s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:02:09.316 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 22 21:02:09.316 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:02:09.325 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:02:11.211 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:02:11.323 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:02:52.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:02:52.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 22 21:02:52.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:02:52.221 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:02:52.241 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 22 21:02:53.103 - 3s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 22 21:02:55.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:02:55.211 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:02:57.219 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:10:05.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:10:05.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:10:06.103 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:10:06.103 - 19s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 22 21:10:09.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:10:10.103 - 8s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 22 21:10:15.221 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:10:19.218 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:10:25.234 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:10:26.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:10:26.211 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:10:37.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:10:37.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:10:37.234 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:10:37.237 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:10:38.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:10:39.103 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:10:48.222 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:10:49.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:10:49.208 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:12:52.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:12:53.103 - 5s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:12:53.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:12:53.239 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:12:59.482 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:15:47.103 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 22 21:15:47.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:15:47.103 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 22 21:15:48.103 - 29s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 22 21:15:48.103 - 54s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 22 21:15:48.103 - 54s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 22 21:16:17.251 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:16:28.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:16:28.223 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Apr 22 21:16:42.231 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 22 21:16:42.232 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 22 21:18:50.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 22 21:18:50.103 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 22 21:18:50.231 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 22 21:18:50.232 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1587590534.xml
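
The headline number (4m24s of 35m8s, roughly 13%) is just the summed outage time divided by the monitored window. A rough sketch of that arithmetic, with hypothetical interval values standing in for the "- Ns" markers above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Hypothetical outage intervals; the real numbers come from the
        // "Route is not responding ... - Ns" markers above plus sub-second blips.
        outages := []time.Duration{13 * time.Second, 9 * time.Second, 8 * time.Second}

        var total time.Duration
        for _, d := range outages {
            total += d
        }

        window := 35*time.Minute + 8*time.Second // the 35m8s monitoring window
        fmt.Printf("unreachable for %s of %s (%.0f%%)\n", total, window, 100*float64(total)/float64(window))
    }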


Cluster upgrade Kubernetes and OpenShift APIs remain available 35m9s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1m25s of 35m8s (4%):

Apr 22 21:00:49.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:00:49.064 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:00:56.496 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:00:56.525 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:01:41.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:01:42.036 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:01:56.435 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:10:18.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 22 21:10:18.065 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:10:36.037 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 22 21:10:36.067 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:10:55.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:10:55.065 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:13:05.040 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Apr 22 21:13:05.040 E kube-apiserver Kube API started failing: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Apr 22 21:13:06.036 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:13:06.036 - 6s    E kube-apiserver Kube API is not responding to GET requests
Apr 22 21:13:06.300 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:13:12.838 I kube-apiserver Kube API started responding to GET requests
Apr 22 21:13:29.037 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 22 21:13:29.065 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:13:45.037 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:13:45.065 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:13:48.141 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:13:48.170 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:13:54.284 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:13:54.313 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:00.429 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:00.457 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:06.572 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:06.601 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:09.645 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:10.036 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:14:12.746 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:18.861 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:18.888 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:21.933 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:22.036 - 12s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:14:34.258 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:40.365 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:40.396 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:55.725 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:55.754 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:14:58.797 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:14:58.828 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:15:01.869 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:15:02.036 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:15:08.057 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:15:11.085 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Apr 22 21:15:12.036 - 8s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:15:20.337 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:16:09.095 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 3.23.27.15:6443: connect: connection refused
Apr 22 21:16:10.036 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:16:16.945 E kube-apiserver Kube API started failing: etcdserver: request timed out
Apr 22 21:16:16.973 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:16:17.002 I kube-apiserver Kube API started responding to GET requests
Apr 22 21:16:33.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:16:33.064 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:16:52.037 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:16:53.036 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:17:07.064 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:17:24.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:17:25.036 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:17:39.070 I openshift-apiserver OpenShift API started responding to GET requests
Apr 22 21:18:03.036 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 22 21:18:03.064 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1587590534.xml
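
The API availability events above are plain GET requests against the apiserver URL with a 15-second client timeout (the ?timeout=15s visible in the error messages). A minimal sketch of such a probe, using a placeholder host and token rather than the test's real kubeconfig-based client:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder host and token; the real monitor uses the cluster's kubeconfig.
        const host = "https://api.example.invalid:6443"
        const token = "REDACTED"

        client := &http.Client{
            Timeout: 15 * time.Second,
            // TLS verification is skipped only to keep this sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        req, err := http.NewRequest("GET", host+"/api/v1/namespaces/kube-system?timeout=15s", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+token)

        if resp, err := client.Do(req); err != nil {
            fmt.Println("Kube API stopped responding to GET requests:", err)
        } else {
            resp.Body.Close()
            fmt.Println("Kube API responded:", resp.Status)
        }
    }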


openshift-tests Monitor cluster while tests execute 36m42s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
189 error level events were detected during this test run:

Apr 22 20:45:49.203 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T20:45:47.595Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T20:45:47.601Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T20:45:47.601Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T20:45:47.602Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T20:45:47.602Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T20:45:47.602Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T20:45:47.603Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T20:45:47.603Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T20:45:47.603Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T20:45:47.603Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 20:45:54.528 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/22 20:44:48 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 22 20:45:54.528 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/22 20:44:51 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:44:51 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/22 20:44:51 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/22 20:44:51 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/22 20:44:51 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/22 20:44:51 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:44:51 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/22 20:44:51 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/22 20:44:51 http.go:106: HTTPS: listening on [::]:9091\n
Apr 22 20:45:54.528 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-22T20:44:48.000347519Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-22T20:44:48.000456787Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-22T20:44:48.001797819Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-22T20:44:53.002311556Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-22T20:44:58.106957459Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 22 20:46:00.544 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T20:45:58.325Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T20:45:58.331Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T20:45:58.331Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T20:45:58.333Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T20:45:58.333Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T20:45:58.333Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T20:45:58.334Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T20:45:58.334Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 20:48:43.540 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Apr 22 20:48:57.534 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-bdc779959-zr7mg node/ip-10-0-132-141.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): -0-144-25.us-east-2.compute.internal container=\"kube-apiserver-5\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0422 20:46:35.194374       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"d10a96f9-ff11-47d0-8179-b31fb90f64e6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-5 -n openshift-kube-apiserver: cause by changes in data.status\nI0422 20:46:43.202321       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"d10a96f9-ff11-47d0-8179-b31fb90f64e6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-5-ip-10-0-144-25.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0422 20:48:21.621183       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17762 (18236)\nW0422 20:48:27.285975       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18237 (18258)\nW0422 20:48:44.256583       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18258 (18352)\nW0422 20:48:54.595552       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18352 (18433)\nI0422 20:48:56.411764       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 20:48:56.411841       1 leaderelection.go:66] leaderelection lost\nF0422 20:48:56.436382       1 builder.go:217] server exited\n
Apr 22 20:50:21.769 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6bdf75698c-4snh4 node/ip-10-0-132-141.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): lector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15590 (17315)\nW0422 20:46:15.709815       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 9909 (15825)\nW0422 20:46:15.792978       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 12416 (15616)\nW0422 20:46:15.797236       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 12918 (15813)\nW0422 20:46:15.798687       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 16642 (16726)\nW0422 20:46:15.800005       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15612 (17347)\nW0422 20:48:21.590435       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17762 (18236)\nW0422 20:48:27.290386       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18237 (18258)\nW0422 20:48:44.322755       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18258 (18359)\nW0422 20:48:54.598235       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18359 (18434)\nI0422 20:50:21.072338       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 20:50:21.072407       1 leaderelection.go:66] leaderelection lost\n
Apr 22 20:51:59.111 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5966bf8fc9-hhn26 node/ip-10-0-132-141.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): s/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18237 (18258)\nW0422 20:48:44.258322       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18258 (18352)\nW0422 20:48:54.595187       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18352 (18433)\nI0422 20:51:58.094293       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0422 20:51:58.094519       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0422 20:51:58.094544       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0422 20:51:58.094560       1 resourcesync_controller.go:227] Shutting down ResourceSyncController\nI0422 20:51:58.094574       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0422 20:51:58.094590       1 state_controller.go:171] Shutting down EncryptionStateController\nI0422 20:51:58.094604       1 prune_controller.go:231] Shutting down PruneController\nI0422 20:51:58.094617       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0422 20:51:58.094631       1 revision_controller.go:346] Shutting down RevisionController\nI0422 20:51:58.094645       1 logging_controller.go:92] Shutting down LogLevelController\nI0422 20:51:58.094658       1 unsupportedconfigoverrides_controller.go:161] Shutting down UnsupportedConfigOverridesController\nI0422 20:51:58.094671       1 config_observer_controller.go:159] Shutting down ConfigObserver\nI0422 20:51:58.094685       1 status_controller.go:211] Shutting down StatusSyncer-openshift-apiserver\nI0422 20:51:58.094701       1 finalizer_controller.go:134] Shutting down FinalizerController\nF0422 20:51:58.094728       1 builder.go:217] server exited\nI0422 20:51:58.101041       1 workload_controller.go:192] Shutting down OpenShiftAPIServerOperator\n
Apr 22 20:53:18.148 E ns/openshift-cluster-node-tuning-operator pod/tuned-4d8th node/ip-10-0-154-151.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ice-test-vzp6h) labels changed node wide: true\nI0422 20:45:41.325713   17091 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:41.338269   17091 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:41.505853   17091 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:45:42.426944   17091 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-9c94f4c48-c55zr) labels changed node wide: true\nI0422 20:45:46.325820   17091 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:46.327610   17091 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:46.437459   17091 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:45:48.297850   17091 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-2530/dp-857d95bf59-t85sc) labels changed node wide: true\nI0422 20:45:51.325758   17091 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:51.327510   17091 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:51.440445   17091 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:46:04.800648   17091 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-b48694ccf-6ksrj) labels changed node wide: true\nI0422 20:46:06.325710   17091 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:46:06.327204   17091 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:46:06.437578   17091 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0422 20:51:56.996173   17091 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:51:56.996197   17091 openshift-tuned.go:883] Increasing resyncPeriod to 126\n
Apr 22 20:53:18.548 E ns/openshift-cluster-node-tuning-operator pod/tuned-m8xks node/ip-10-0-130-132.us-east-2.compute.internal container=tuned container exited with code 143 (Error): nged node wide: true\nI0422 20:45:26.206760   21430 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:26.209219   21430 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:26.321885   21430 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:45:32.107145   21430 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-9c94f4c48-r9xdx) labels changed node wide: true\nI0422 20:45:36.206706   21430 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:36.208189   21430 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:36.388424   21430 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:45:45.761862   21430 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8213/pod-configmap-de56d5f1-ead3-4192-992d-e50d333733bb) labels changed node wide: false\nI0422 20:45:50.001968   21430 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8213/pod-configmap-de56d5f1-ead3-4192-992d-e50d333733bb) labels changed node wide: false\nI0422 20:45:53.733918   21430 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-b48694ccf-d8lbn) labels changed node wide: true\nI0422 20:45:56.206676   21430 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:45:56.208386   21430 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:45:56.323301   21430 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 20:51:56.991150   21430 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 20:51:56.995349   21430 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:51:56.995370   21430 openshift-tuned.go:883] Increasing resyncPeriod to 110\n
Apr 22 20:53:58.940 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): 0] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371786       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371809       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371832       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371853       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371879       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0422 20:53:57.371901       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nW0422 20:53:57.420709       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodSecurityPolicy ended with: too old resource version: 14744 (20563)\nW0422 20:53:57.450936       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: too old resource version: 17379 (20555)\nW0422 20:53:57.690630       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 15601 (20604)\nW0422 20:53:57.760367       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 20499 (20605)\nI0422 20:53:58.213425       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded\nF0422 20:53:58.213664       1 controllermanager.go:291] leaderelection lost\n
Apr 22 20:54:52.766 E ns/openshift-machine-api pod/machine-api-controllers-b98896477-hbngc node/ip-10-0-136-212.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 22 20:55:33.886 E ns/openshift-monitoring pod/openshift-state-metrics-84d9d4b47c-bxr6v node/ip-10-0-130-132.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Apr 22 20:55:35.891 E ns/openshift-monitoring pod/kube-state-metrics-5bb89767d9-9zgst node/ip-10-0-130-132.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 22 20:55:45.644 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7bb9wzvs4 node/ip-10-0-144-25.us-east-2.compute.internal container=operator container exited with code 255 (Error):  (8.289405ms) 200 [Prometheus/2.14.0 10.129.2.17:54482]\nI0422 20:54:14.064400       1 httplog.go:90] GET /metrics: (7.025635ms) 200 [Prometheus/2.14.0 10.128.2.17:46524]\nI0422 20:54:22.774009       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0422 20:54:29.948811       1 httplog.go:90] GET /metrics: (8.225767ms) 200 [Prometheus/2.14.0 10.129.2.17:54482]\nI0422 20:54:44.064625       1 httplog.go:90] GET /metrics: (7.202504ms) 200 [Prometheus/2.14.0 10.128.2.17:46524]\nI0422 20:54:59.950861       1 httplog.go:90] GET /metrics: (10.274937ms) 200 [Prometheus/2.14.0 10.129.2.17:54482]\nI0422 20:55:02.412288       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 25 items received\nW0422 20:55:02.649478       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 21232 (21254)\nI0422 20:55:03.649797       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0422 20:55:05.154307       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 0 items received\nW0422 20:55:05.245572       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 21254 (21268)\nI0422 20:55:06.246230       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0422 20:55:14.064553       1 httplog.go:90] GET /metrics: (7.011787ms) 200 [Prometheus/2.14.0 10.128.2.17:46524]\nI0422 20:55:28.749237       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 20:55:28.749297       1 leaderelection.go:66] leaderelection lost\n
Apr 22 20:55:45.699 E ns/openshift-insights pod/insights-operator-67c5585799-nbnq5 node/ip-10-0-144-25.us-east-2.compute.internal container=operator container exited with code 2 (Error): T /metrics: (1.689264ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:52:44.699810       1 httplog.go:90] GET /metrics: (8.433244ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:52:50.566911       1 httplog.go:90] GET /metrics: (1.674397ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:53:14.700389       1 httplog.go:90] GET /metrics: (9.020907ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:53:20.567077       1 httplog.go:90] GET /metrics: (1.816692ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:53:21.613356       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0422 20:53:21.617630       1 configobserver.go:90] Found cloud.openshift.com token\nI0422 20:53:21.617668       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0422 20:53:21.635483       1 status.go:298] The operator is healthy\nI0422 20:53:21.635559       1 status.go:373] No status update necessary, objects are identical\nI0422 20:53:44.701235       1 httplog.go:90] GET /metrics: (9.981102ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:53:50.573967       1 httplog.go:90] GET /metrics: (2.409517ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:54:14.707511       1 httplog.go:90] GET /metrics: (16.271241ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:54:20.566806       1 httplog.go:90] GET /metrics: (1.580041ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:54:44.700000       1 httplog.go:90] GET /metrics: (8.748796ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:54:50.566759       1 httplog.go:90] GET /metrics: (1.580705ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:55:14.700275       1 httplog.go:90] GET /metrics: (8.974384ms) 200 [Prometheus/2.14.0 10.128.2.17:43576]\nI0422 20:55:20.567449       1 httplog.go:90] GET /metrics: (2.205785ms) 200 [Prometheus/2.14.0 10.129.2.17:43730]\nI0422 20:55:21.634400       1 status.go:298] The operator is healthy\nI0422 20:55:21.634497       1 status.go:373] No status update necessary, objects are identical\n
Apr 22 20:55:55.749 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=scheduler container exited with code 255 (Error):       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0422 20:55:54.648696       1 leaderelection.go:330] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0422 20:55:54.648873       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0422 20:55:54.875385       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0422 20:55:54.884472       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nE0422 20:55:54.884532       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0422 20:55:54.884561       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0422 20:55:54.884797       1 webhook.go:107] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-scheduler" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0422 20:55:54.884824       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-scheduler" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nI0422 20:55:55.395891       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0422 20:55:55.396283       1 server.go:264] leaderelection lost\n
Apr 22 20:55:56.765 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ource version: 22699 (22864)\nW0422 20:55:54.991256       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.NetworkPolicy ended with: too old resource version: 17686 (22883)\nW0422 20:55:55.014284       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: too old resource version: 17684 (22870)\nW0422 20:55:55.014576       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 22690 (22913)\nE0422 20:55:55.014904       1 reflector.go:280] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.TemplateInstance: the server could not find the requested resource (get templateinstances.template.openshift.io)\nW0422 20:55:55.015175       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 17684 (22869)\nE0422 20:55:55.041978       1 reflector.go:280] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: the server could not find the requested resource (get templates.template.openshift.io)\nW0422 20:55:55.403499       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 17718 (23196)\nW0422 20:55:55.494029       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 17731 (23201)\nW0422 20:55:55.632549       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 18160 (23203)\nI0422 20:55:55.683687       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded\nF0422 20:55:55.683892       1 controllermanager.go:291] leaderelection lost\n
Apr 22 20:56:06.583 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-151.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/22 20:45:48 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 22 20:56:06.583 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/22 20:45:48 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:45:48 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/22 20:45:48 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/22 20:45:48 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/22 20:45:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/22 20:45:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:45:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/22 20:45:48 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/22 20:45:48 http.go:106: HTTPS: listening on [::]:9091\n
Apr 22 20:56:06.583 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-22T20:45:47.969054628Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-22T20:45:47.969196619Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-22T20:45:47.970569227Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-22T20:45:53.086828065Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 22 20:56:06.860 E ns/openshift-monitoring pod/node-exporter-fvrx6 node/ip-10-0-143-150.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:39:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 20:56:07.572 E ns/openshift-monitoring pod/thanos-querier-9c94f4c48-c55zr node/ip-10-0-154-151.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/22 20:45:45 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/22 20:45:45 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/22 20:45:45 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/22 20:45:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/22 20:45:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/22 20:45:45 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/22 20:45:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/22 20:45:45 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/22 20:45:45 http.go:106: HTTPS: listening on [::]:9091\n
Apr 22 20:56:07.759 E ns/openshift-ingress pod/router-default-7cfc5c9745-s46lw node/ip-10-0-154-151.us-east-2.compute.internal container=router container exited with code 2 (Error): p://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:05.148654       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nE0422 20:55:14.902077       1 limiter.go:140] error reloading router: wait: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nI0422 20:55:19.874973       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:24.882054       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:29.878693       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:34.906647       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:39.878371       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:44.888185       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:55:56.815613       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0422 20:56:05.732526       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Apr 22 20:56:07.821 E ns/openshift-monitoring pod/prometheus-adapter-fcb49585f-gsrg6 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0422 20:44:37.303541       1 adapter.go:93] successfully using in-cluster auth\nI0422 20:44:37.722004       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 22 20:56:10.507 E ns/openshift-cluster-node-tuning-operator pod/tuned-5sf4h node/ip-10-0-136-212.us-east-2.compute.internal container=tuned container exited with code 143 (Error): t-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:21.302790   56887 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-5f85689568-fjbtw) labels changed node wide: true\nI0422 20:55:24.628540   56887 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:24.630523   56887 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:24.870064   56887 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:24.870143   56887 openshift-tuned.go:550] Pod (openshift-image-registry/cluster-image-registry-operator-6b596647-msd4h) labels changed node wide: true\nI0422 20:55:29.628269   56887 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:29.630722   56887 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:29.871291   56887 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:33.640141   56887 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 20:55:37.375824   56887 openshift-tuned.go:550] Pod (openshift-cluster-samples-operator/cluster-samples-operator-d77896f84-6p8c5) labels changed node wide: true\nI0422 20:55:39.627541   56887 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:39.632924   56887 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:39.862339   56887 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0422 20:55:44.205016   56887 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:55:44.205077   56887 openshift-tuned.go:883] Increasing resyncPeriod to 114\n
Apr 22 20:56:10.692 E ns/openshift-cluster-node-tuning-operator pod/tuned-lwpxs node/ip-10-0-132-141.us-east-2.compute.internal container=tuned container exited with code 143 (Error): not trigger profile reload.\nI0422 20:54:40.795533   56971 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-controllers-78bc64cb6f-qr54j) labels changed node wide: true\nI0422 20:54:44.623272   56971 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:54:44.624792   56971 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:54:44.767752   56971 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:17.126648   56971 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5dbb6d6d46-5qs7k) labels changed node wide: true\nI0422 20:55:19.623228   56971 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:19.624692   56971 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:19.766310   56971 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:20.740769   56971 openshift-tuned.go:550] Pod (openshift-cluster-machine-approver/machine-approver-69c5458b7b-tgm2q) labels changed node wide: true\nI0422 20:55:24.623294   56971 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:24.625111   56971 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:24.901224   56971 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:40.722219   56971 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-8494d6cb5f-x9mpt) labels changed node wide: true\nI0422 20:55:44.203241   56971 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 20:55:44.208976   56971 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:55:44.208996   56971 openshift-tuned.go:883] Increasing resyncPeriod to 130\n
Apr 22 20:56:10.806 E ns/openshift-cluster-node-tuning-operator pod/tuned-526ts node/ip-10-0-143-150.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ift-node)\nI0422 20:53:24.622599   40861 openshift-tuned.go:263] Starting tuned...\n2020-04-22 20:53:24,745 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-22 20:53:24,751 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-22 20:53:24,752 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-22 20:53:24,754 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\nI0422 20:53:24.754867   40861 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-dqqgd) labels changed node wide: false\n2020-04-22 20:53:24,755 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-22 20:53:24,790 INFO     tuned.daemon.controller: starting controller\n2020-04-22 20:53:24,790 INFO     tuned.daemon.daemon: starting tuning\n2020-04-22 20:53:24,795 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-22 20:53:24,796 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-22 20:53:24,799 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-22 20:53:24,801 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-22 20:53:24,802 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-22 20:53:24,908 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-22 20:53:24,909 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0422 20:55:41.991535   40861 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-694cc7f5d8-zvzmb) labels changed node wide: true\nI0422 20:55:44.225034   40861 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 20:55:44.228231   40861 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:55:44.228249   40861 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Apr 22 20:56:10.861 E ns/openshift-cluster-node-tuning-operator pod/tuned-6xpwn node/ip-10-0-144-25.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 656   57338 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:19.778459   57338 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:19.908337   57338 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:20.070882   57338 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/olm-operator-56dcdbf879-5qxwn) labels changed node wide: true\nI0422 20:55:24.776670   57338 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:24.779303   57338 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:24.970759   57338 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:25.383781   57338 openshift-tuned.go:550] Pod (openshift-marketplace/marketplace-operator-79cc6cfd6b-swkvs) labels changed node wide: true\nI0422 20:55:29.777383   57338 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:29.780237   57338 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:30.147244   57338 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 20:55:36.899782   57338 openshift-tuned.go:550] Pod (openshift-service-ca-operator/service-ca-operator-598747c55d-wwd9z) labels changed node wide: true\nI0422 20:55:39.779470   57338 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 20:55:39.782917   57338 openshift-tuned.go:441] Getting recommended profile...\nI0422 20:55:39.956076   57338 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0422 20:55:44.216265   57338 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 20:55:44.216352   57338 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Apr 22 20:56:12.554 E ns/openshift-monitoring pod/prometheus-adapter-fcb49585f-twhsr node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0422 20:44:37.669530       1 adapter.go:93] successfully using in-cluster auth\nI0422 20:44:39.011845       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 22 20:56:43.182 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T20:56:12.099Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T20:56:12.100Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T20:56:12.105Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T20:56:12.106Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T20:56:12.107Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T20:56:12.107Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T20:56:12.107Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T20:56:12.107Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T20:56:12.107Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T20:56:12.107Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 20:56:46.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/22 20:45:59 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 22 20:56:46.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/22 20:45:59 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:45:59 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/22 20:45:59 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/22 20:45:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/22 20:45:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/22 20:45:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/22 20:45:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/22 20:45:59 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/22 20:45:59 http.go:106: HTTPS: listening on [::]:9091\n2020/04/22 20:50:05 oauthproxy.go:774: basicauth: 10.129.2.8:56698 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/22 20:51:03 oauthproxy.go:774: basicauth: 10.129.0.24:45592 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/22 20:54:36 oauthproxy.go:774: basicauth: 10.129.2.8:58890 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/22 20:55:53 oauthproxy.go:774: basicauth: 10.131.0.23:35530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/22 20:56:14 oauthproxy.go:774: basicauth: 10.129.0.59:41308 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 22 20:56:46.705 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-22T20:45:58.706085229Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-22T20:45:58.706232576Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-22T20:45:58.709194259Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-22T20:46:03.834353599Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 22 20:56:54.411 E ns/openshift-controller-manager pod/controller-manager-b6wnr node/ip-10-0-136-212.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 22 20:57:02.152 E ns/openshift-monitoring pod/node-exporter-45hnx node/ip-10-0-144-25.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:39:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:39:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 20:57:03.878 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T20:56:59.241Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T20:56:59.242Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T20:56:59.244Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T20:56:59.247Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T20:56:59.247Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T20:56:59.247Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T20:56:59.248Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T20:56:59.248Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T20:56:59.248Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T20:56:59.250Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T20:56:59.250Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 20:57:13.947 E ns/openshift-marketplace pod/certified-operators-76589b4b69-vvsf4 node/ip-10-0-130-132.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 22 20:57:15.942 E ns/openshift-marketplace pod/community-operators-656cbf4f48-pm7kf node/ip-10-0-130-132.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 22 20:57:17.508 E ns/openshift-service-ca pod/service-serving-cert-signer-7fb7597577-5kmx4 node/ip-10-0-136-212.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 22 20:57:31.350 E ns/openshift-controller-manager pod/controller-manager-c87t9 node/ip-10-0-144-25.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Apr 22 20:58:16.485 E ns/openshift-console pod/console-77f6565d5-kdqxv node/ip-10-0-144-25.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/04/22 20:43:02 cmd/main: cookies are secure!\n2020/04/22 20:43:02 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:43:12 cmd/main: Binding to [::]:8443...\n2020/04/22 20:43:12 cmd/main: using TLS\n2020/04/22 20:56:36 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Apr 22 20:58:26.759 E ns/openshift-console pod/console-77f6565d5-9phqt node/ip-10-0-136-212.us-east-2.compute.internal container=console container exited with code 2 (Error): 0/04/22 20:41:29 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:41:39 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:41:49 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:41:59 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:42:09 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:42:19 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:42:29 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:42:39 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/04/22 20:42:49 cmd/main: Binding to [::]:8443...\n2020/04/22 20:42:49 cmd/main: using TLS\n2020/04/22 20:57:43 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Apr 22 21:00:36.630 E ns/openshift-sdn pod/sdn-controller-w2tff node/ip-10-0-132-141.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0422 20:30:44.274776       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 22 21:00:42.657 E ns/openshift-sdn pod/sdn-pnnm9 node/ip-10-0-132-141.us-east-2.compute.internal container=sdn container exited with code 255 (Error): processing 0 service events\nI0422 20:58:26.362614    2622 proxier.go:350] userspace syncProxyRules took 32.050734ms\nI0422 20:58:45.062415    2622 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-6rhv6\nI0422 20:58:51.352332    2622 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-76krz got IP 10.130.0.56, ofport 57\nI0422 20:58:55.249262    2622 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.66:8443 10.129.0.69:8443 10.130.0.56:8443]\nI0422 20:58:55.249302    2622 roundrobin.go:218] Delete endpoint 10.130.0.56:8443 for service "openshift-controller-manager/controller-manager:https"\nI0422 20:58:55.402004    2622 proxier.go:371] userspace proxy: processing 0 service events\nI0422 20:58:55.402030    2622 proxier.go:350] userspace syncProxyRules took 29.434524ms\nI0422 20:59:25.548141    2622 proxier.go:371] userspace proxy: processing 0 service events\nI0422 20:59:25.548166    2622 proxier.go:350] userspace syncProxyRules took 32.395777ms\nI0422 20:59:55.703912    2622 proxier.go:371] userspace proxy: processing 0 service events\nI0422 20:59:55.703940    2622 proxier.go:350] userspace syncProxyRules took 29.292124ms\nI0422 21:00:25.855274    2622 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:00:25.855307    2622 proxier.go:350] userspace syncProxyRules took 33.08213ms\nI0422 21:00:29.615657    2622 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.15:6443]\nI0422 21:00:29.615700    2622 roundrobin.go:218] Delete endpoint 10.129.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0422 21:00:29.797768    2622 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:00:29.797796    2622 proxier.go:350] userspace syncProxyRules took 30.237734ms\nF0422 21:00:42.221101    2622 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 22 21:00:48.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:01:00.160 E ns/openshift-multus pod/multus-dbzz2 node/ip-10-0-144-25.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:01:07.527 E ns/openshift-sdn pod/sdn-cl5xt node/ip-10-0-130-132.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 0:59:55.674876    2084 proxier.go:371] userspace proxy: processing 0 service events\nI0422 20:59:55.674899    2084 proxier.go:350] userspace syncProxyRules took 27.830882ms\nI0422 21:00:25.813321    2084 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:00:25.813345    2084 proxier.go:350] userspace syncProxyRules took 28.237311ms\nI0422 21:00:29.617239    2084 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.15:6443]\nI0422 21:00:29.617313    2084 roundrobin.go:218] Delete endpoint 10.129.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0422 21:00:29.755888    2084 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:00:29.755914    2084 proxier.go:350] userspace syncProxyRules took 28.044129ms\nI0422 21:00:55.490327    2084 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.52:8443 10.129.0.49:8443]\nI0422 21:00:55.490364    2084 roundrobin.go:218] Delete endpoint 10.130.0.47:8443 for service "openshift-apiserver/api:https"\nI0422 21:00:55.621534    2084 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:00:55.621562    2084 proxier.go:350] userspace syncProxyRules took 27.67825ms\nI0422 21:01:00.103617    2084 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.52:8443 10.129.0.49:8443 10.130.0.47:8443]\nI0422 21:01:00.103654    2084 roundrobin.go:218] Delete endpoint 10.130.0.47:8443 for service "openshift-apiserver/api:https"\nI0422 21:01:00.248221    2084 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:00.248260    2084 proxier.go:350] userspace syncProxyRules took 35.193299ms\nI0422 21:01:06.590887    2084 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0422 21:01:06.590944    2084 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 22 21:01:36.349 E ns/openshift-sdn pod/sdn-49d2n node/ip-10-0-136-212.us-east-2.compute.internal container=sdn container exited with code 255 (Error): t-ingress/router-internal-default:metrics to [10.128.2.18:1936]\nI0422 21:01:30.384870    2340 roundrobin.go:218] Delete endpoint 10.131.0.27:1936 for service "openshift-ingress/router-internal-default:metrics"\nI0422 21:01:30.384883    2340 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.128.2.18:443]\nI0422 21:01:30.384894    2340 roundrobin.go:218] Delete endpoint 10.131.0.27:443 for service "openshift-ingress/router-internal-default:https"\nI0422 21:01:30.385489    2340 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:http to [10.128.2.18:80]\nI0422 21:01:30.385514    2340 roundrobin.go:218] Delete endpoint 10.131.0.27:80 for service "openshift-ingress/router-default:http"\nI0422 21:01:30.385524    2340 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:https to [10.128.2.18:443]\nI0422 21:01:30.385531    2340 roundrobin.go:218] Delete endpoint 10.131.0.27:443 for service "openshift-ingress/router-default:https"\nI0422 21:01:30.553706    2340 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:30.553740    2340 proxier.go:350] userspace syncProxyRules took 32.312042ms\nI0422 21:01:30.694232    2340 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:30.694277    2340 proxier.go:350] userspace syncProxyRules took 33.082235ms\nI0422 21:01:33.998653    2340 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-image-registry/image-registry:5000-tcp to [10.131.0.22:5000]\nI0422 21:01:33.998688    2340 roundrobin.go:218] Delete endpoint 10.131.0.22:5000 for service "openshift-image-registry/image-registry:5000-tcp"\nI0422 21:01:34.149105    2340 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:34.149130    2340 proxier.go:350] userspace syncProxyRules took 29.872617ms\nF0422 21:01:35.336861    2340 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 22 21:01:37.450 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5f85689568-fjbtw node/ip-10-0-136-212.us-east-2.compute.internal container=manager container exited with code 1 (Error): achine-api-gcp secret=openshift-machine-api/gcp-cloud-credentials\ntime="2020-04-22T20:55:46Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-gcp secret=openshift-machine-api/gcp-cloud-credentials\ntime="2020-04-22T20:55:46Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-gcp secret=openshift-machine-api/gcp-cloud-credentials\ntime="2020-04-22T20:55:46Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-04-22T20:55:46Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-04-22T20:55:46Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-04-22T20:55:46Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-04-22T20:55:46Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-04-22T20:55:46Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-04-22T20:55:47Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-04-22T20:57:46Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-04-22T20:57:46Z" level=info msg="reconcile complete" controller=metrics elapsed=1.280517ms\ntime="2020-04-22T20:59:46Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-04-22T20:59:46Z" level=info msg="reconcile complete" controller=metrics elapsed=1.399228ms\ntime="2020-04-22T21:01:36Z" level=error msg="leader election lostunable to run the manager"\n
Apr 22 21:01:45.779 E ns/openshift-multus pod/multus-qmnvv node/ip-10-0-154-151.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:01:48.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:02:05.393 E ns/openshift-sdn pod/sdn-v7szw node/ip-10-0-144-25.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 6ms\nI0422 21:01:54.270852   80836 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-console-operator/metrics:https\nI0422 21:01:54.558896   80836 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-console-operator/metrics:https to [10.128.0.62:8443]\nI0422 21:01:54.559313   80836 roundrobin.go:218] Delete endpoint 10.128.0.62:8443 for service "openshift-console-operator/metrics:https"\nI0422 21:01:54.578208   80836 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:54.578335   80836 proxier.go:350] userspace syncProxyRules took 123.012165ms\nI0422 21:01:54.795208   80836 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:54.795755   80836 proxier.go:350] userspace syncProxyRules took 37.257101ms\nI0422 21:01:57.778739   80836 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.129.0.71:6443 10.130.0.57:6443]\nI0422 21:01:57.778857   80836 roundrobin.go:218] Delete endpoint 10.130.0.57:6443 for service "openshift-multus/multus-admission-controller:"\nI0422 21:01:57.806530   80836 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.71:6443 10.130.0.57:6443]\nI0422 21:01:57.806576   80836 roundrobin.go:218] Delete endpoint 10.128.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0422 21:01:57.950698   80836 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:57.950728   80836 proxier.go:350] userspace syncProxyRules took 35.671155ms\nI0422 21:01:58.107065   80836 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:58.107096   80836 proxier.go:350] userspace syncProxyRules took 30.036089ms\nI0422 21:02:04.676464   80836 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0422 21:02:04.676516   80836 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 22 21:02:26.523 E ns/openshift-sdn pod/sdn-6xbkh node/ip-10-0-143-150.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 21:02:09.746645   74780 service.go:357] Adding new service port "openshift-apiserver/api:https" at 172.30.158.22:443/TCP\nI0422 21:02:09.746658   74780 service.go:357] Adding new service port "openshift-etcd/etcd:etcd" at 172.30.28.192:2379/TCP\nI0422 21:02:09.746668   74780 service.go:357] Adding new service port "openshift-etcd/etcd:etcd-metrics" at 172.30.28.192:9979/TCP\nI0422 21:02:09.746678   74780 service.go:357] Adding new service port "openshift-machine-api/machine-api-operator:https" at 172.30.224.67:8443/TCP\nI0422 21:02:09.746876   74780 proxier.go:731] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0422 21:02:09.840309   74780 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:02:09.840329   74780 proxier.go:350] userspace syncProxyRules took 94.679079ms\nI0422 21:02:09.849580   74780 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:02:09.849602   74780 proxier.go:350] userspace syncProxyRules took 103.661738ms\nI0422 21:02:09.903321   74780 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-1658/service-test:" (:32637/tcp)\nI0422 21:02:09.903644   74780 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:32374/tcp)\nI0422 21:02:09.903801   74780 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:32483/tcp)\nI0422 21:02:09.938762   74780 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31375\nI0422 21:02:10.047718   74780 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0422 21:02:10.047753   74780 cmd.go:173] openshift-sdn network plugin registering startup\nI0422 21:02:10.047866   74780 cmd.go:177] openshift-sdn network plugin ready\nI0422 21:02:26.395707   74780 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0422 21:02:26.395758   74780 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 22 21:02:28.419 E ns/openshift-multus pod/multus-admission-controller-28h26 node/ip-10-0-136-212.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 22 21:02:36.659 E ns/openshift-multus pod/multus-x87vg node/ip-10-0-130-132.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:02:38.506 E ns/openshift-service-ca pod/apiservice-cabundle-injector-65b6b5584d-thpgd node/ip-10-0-144-25.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Apr 22 21:02:39.521 E ns/openshift-service-ca pod/service-serving-cert-signer-9c69cc9fb-85hft node/ip-10-0-144-25.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Apr 22 21:02:51.914 E ns/openshift-sdn pod/sdn-cnsfx node/ip-10-0-154-151.us-east-2.compute.internal container=sdn container exited with code 255 (Error): \nI0422 21:01:57.961645   61215 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:57.961677   61215 proxier.go:350] userspace syncProxyRules took 49.465861ms\nI0422 21:01:58.106856   61215 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:01:58.106878   61215 proxier.go:350] userspace syncProxyRules took 29.99249ms\nI0422 21:02:28.241979   61215 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:02:28.242004   61215 proxier.go:350] userspace syncProxyRules took 27.302587ms\nI0422 21:02:33.451392   61215 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.71:6443 10.130.0.57:6443]\nI0422 21:02:33.451437   61215 roundrobin.go:218] Delete endpoint 10.128.0.67:6443 for service "openshift-multus/multus-admission-controller:"\nI0422 21:02:33.585533   61215 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:02:33.585559   61215 proxier.go:350] userspace syncProxyRules took 27.768454ms\nI0422 21:02:49.469789   61215 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nE0422 21:02:49.469822   61215 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0422 21:02:49.574894   61215 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0422 21:02:51.285279   61215 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-1658/service-test: to [10.129.2.14:80]\nI0422 21:02:51.285319   61215 roundrobin.go:218] Delete endpoint 10.128.2.15:80 for service "e2e-k8s-service-lb-available-1658/service-test:"\nI0422 21:02:51.423363   61215 proxier.go:371] userspace proxy: processing 0 service events\nI0422 21:02:51.423385   61215 proxier.go:350] userspace syncProxyRules took 27.61139ms\nF0422 21:02:51.457205   61215 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 22 21:03:24.609 E ns/openshift-multus pod/multus-6dskj node/ip-10-0-136-212.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:04:06.167 E ns/openshift-multus pod/multus-zwsdz node/ip-10-0-132-141.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:04:47.836 E ns/openshift-multus pod/multus-ct2qj node/ip-10-0-143-150.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 22 21:07:13.312 E ns/openshift-machine-config-operator pod/machine-config-daemon-p896s node/ip-10-0-130-132.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:07:18.404 E ns/openshift-machine-config-operator pod/machine-config-daemon-rpl5r node/ip-10-0-136-212.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:07:35.219 E ns/openshift-machine-config-operator pod/machine-config-daemon-gqhrb node/ip-10-0-143-150.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:07:53.613 E ns/openshift-machine-config-operator pod/machine-config-daemon-cgg78 node/ip-10-0-144-25.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:08:10.680 E ns/openshift-machine-config-operator pod/machine-config-controller-5c8d59d7c6-gskq4 node/ip-10-0-144-25.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): v1alpha1.ImageContentSourcePolicy ended with: too old resource version: 15813 (19690)\nW0422 20:53:49.209267       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 17064 (19548)\nW0422 20:53:49.566330       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 15737 (19690)\nW0422 20:53:49.566550       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ContainerRuntimeConfig ended with: too old resource version: 19539 (19681)\nW0422 20:53:49.610556       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 15852 (19681)\nI0422 20:53:50.308080       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool master\nI0422 20:53:50.432296       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool worker\nW0422 20:55:02.724579       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 21005 (21254)\nW0422 20:55:05.370894       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 21254 (21269)\nW0422 21:01:37.658866       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 27052 (27608)\nW0422 21:02:17.082487       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 27608 (28015)\n
Apr 22 21:09:43.867 E ns/openshift-machine-config-operator pod/machine-config-server-rd5hz node/ip-10-0-136-212.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0422 20:35:26.469932       1 start.go:38] Version: machine-config-daemon-4.3.14-202004200318-2-g56203fd3-dirty (56203fd320b6a22dcaa9a6b312de2f22484f9b12)\nI0422 20:35:26.470832       1 api.go:51] Launching server on :22624\nI0422 20:35:26.470930       1 api.go:51] Launching server on :22623\nI0422 20:36:33.077834       1 api.go:97] Pool worker requested by 10.0.143.92:27270\nI0422 20:36:41.722794       1 api.go:97] Pool worker requested by 10.0.143.92:29124\n
Apr 22 21:09:54.152 E ns/openshift-monitoring pod/kube-state-metrics-8cb4c78f4-hx4mf node/ip-10-0-130-132.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 22 21:09:54.288 E ns/openshift-cluster-machine-approver pod/machine-approver-7c948697c4-6mqll node/ip-10-0-132-141.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0422 20:55:28.337334       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0422 20:55:28.337530       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0422 20:55:28.337632       1 main.go:236] Starting Machine Approver\nI0422 20:55:28.442034       1 main.go:146] CSR csr-bpfvt added\nI0422 20:55:28.442174       1 main.go:149] CSR csr-bpfvt is already approved\nI0422 20:55:28.442242       1 main.go:146] CSR csr-fr5n5 added\nI0422 20:55:28.442322       1 main.go:149] CSR csr-fr5n5 is already approved\nI0422 20:55:28.442380       1 main.go:146] CSR csr-4x78c added\nI0422 20:55:28.442426       1 main.go:149] CSR csr-4x78c is already approved\nI0422 20:55:28.444144       1 main.go:146] CSR csr-6dz6t added\nI0422 20:55:28.444212       1 main.go:149] CSR csr-6dz6t is already approved\nI0422 20:55:28.444296       1 main.go:146] CSR csr-8w9kd added\nI0422 20:55:28.444343       1 main.go:149] CSR csr-8w9kd is already approved\nI0422 20:55:28.444396       1 main.go:146] CSR csr-dwzmt added\nI0422 20:55:28.444466       1 main.go:149] CSR csr-dwzmt is already approved\nI0422 20:55:28.444514       1 main.go:146] CSR csr-fkgx5 added\nI0422 20:55:28.444552       1 main.go:149] CSR csr-fkgx5 is already approved\nI0422 20:55:28.444591       1 main.go:146] CSR csr-wp74x added\nI0422 20:55:28.444661       1 main.go:149] CSR csr-wp74x is already approved\nI0422 20:55:28.444709       1 main.go:146] CSR csr-xv8pp added\nI0422 20:55:28.444751       1 main.go:149] CSR csr-xv8pp is already approved\nI0422 20:55:28.444794       1 main.go:146] CSR csr-2dfkk added\nI0422 20:55:28.444861       1 main.go:149] CSR csr-2dfkk is already approved\nI0422 20:55:28.444904       1 main.go:146] CSR csr-67k87 added\nI0422 20:55:28.444945       1 main.go:149] CSR csr-67k87 is already approved\nI0422 20:55:28.445311       1 main.go:146] CSR csr-7wrcj added\nI0422 20:55:28.445364       1 main.go:149] CSR csr-7wrcj is already approved\n
Apr 22 21:09:56.493 E ns/openshift-machine-config-operator pod/machine-config-controller-c86d49447-hhfjb node/ip-10-0-132-141.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): openshift.io/v1  } {MachineConfig  99-master-b630e01a-c7ab-4a14-87f1-4fe0ff9e32b7-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0422 21:09:46.371315       1 render_controller.go:516] Pool master: now targeting: rendered-master-177b6ac7d8130ae259fe171cbb3e4307\nI0422 21:09:46.373904       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-3edf0862e418b5304a91fe221043ed91\nI0422 21:09:51.372031       1 node_controller.go:758] Setting node ip-10-0-132-141.us-east-2.compute.internal to desired config rendered-master-177b6ac7d8130ae259fe171cbb3e4307\nI0422 21:09:51.373849       1 node_controller.go:758] Setting node ip-10-0-130-132.us-east-2.compute.internal to desired config rendered-worker-3edf0862e418b5304a91fe221043ed91\nI0422 21:09:51.395003       1 node_controller.go:452] Pool master: node ip-10-0-132-141.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-177b6ac7d8130ae259fe171cbb3e4307\nI0422 21:09:51.395642       1 node_controller.go:452] Pool worker: node ip-10-0-130-132.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-3edf0862e418b5304a91fe221043ed91\nI0422 21:09:52.408122       1 node_controller.go:452] Pool worker: node ip-10-0-130-132.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0422 21:09:52.416252       1 node_controller.go:452] Pool master: node ip-10-0-132-141.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0422 21:09:52.424628       1 node_controller.go:433] Pool worker: node ip-10-0-130-132.us-east-2.compute.internal is now reporting unready: node ip-10-0-130-132.us-east-2.compute.internal is reporting Unschedulable\nI0422 21:09:52.444133       1 node_controller.go:433] Pool master: node ip-10-0-132-141.us-east-2.compute.internal is now reporting unready: node ip-10-0-132-141.us-east-2.compute.internal is reporting Unschedulable\n
Apr 22 21:09:56.685 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7bdf57b7-5bmfk node/ip-10-0-132-141.us-east-2.compute.internal container=operator container exited with code 255 (Error):      1 httplog.go:90] GET /metrics: (1.437399ms) 200 [Prometheus/2.14.0 10.128.2.21:36022]\nI0422 21:09:15.744073       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:09:25.754027       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:09:29.769797       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 1 items received\nI0422 21:09:35.768275       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:09:38.665839       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0422 21:09:38.665871       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0422 21:09:38.667277       1 httplog.go:90] GET /metrics: (6.168152ms) 200 [Prometheus/2.14.0 10.129.2.25:56926]\nI0422 21:09:39.525658       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0422 21:09:39.525684       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0422 21:09:39.526920       1 httplog.go:90] GET /metrics: (1.381485ms) 200 [Prometheus/2.14.0 10.128.2.21:36022]\nI0422 21:09:45.777778       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:09:51.559044       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received\nI0422 21:09:55.763254       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 21:09:55.763421       1 leaderelection.go:66] leaderelection lost\nF0422 21:09:55.821978       1 builder.go:217] server exited\n
Apr 22 21:10:02.875 E kube-apiserver failed contacting the API: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=30711&timeout=9m17s&timeoutSeconds=557&watch=true: dial tcp 3.23.27.15:6443: connect: connection refused
Apr 22 21:10:23.159 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Apr 22 21:10:41.414 E ns/openshift-machine-config-operator pod/machine-config-server-jl6x2 node/ip-10-0-144-25.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0422 20:35:26.456656       1 start.go:38] Version: machine-config-daemon-4.3.14-202004200318-2-g56203fd3-dirty (56203fd320b6a22dcaa9a6b312de2f22484f9b12)\nI0422 20:35:26.457831       1 api.go:51] Launching server on :22624\nI0422 20:35:26.457907       1 api.go:51] Launching server on :22623\n
Apr 22 21:10:53.016 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 22 21:12:17.816 E ns/openshift-monitoring pod/node-exporter-qcr5f node/ip-10-0-130-132.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:55:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:55:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:12:17.830 E ns/openshift-cluster-node-tuning-operator pod/tuned-gpsrr node/ip-10-0-130-132.us-east-2.compute.internal container=tuned container exited with code 143 (Error): openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:01:26.716068   57258 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:01:26.874671   57258 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:02:43.734713   57258 openshift-tuned.go:550] Pod (openshift-multus/multus-x87vg) labels changed node wide: true\nI0422 21:02:46.715216   57258 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:02:46.720236   57258 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:02:46.877050   57258 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:06:16.187833   57258 openshift-tuned.go:550] Pod (openshift-dns/dns-default-t5xbw) labels changed node wide: true\nI0422 21:06:16.719481   57258 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:06:16.721119   57258 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:06:16.894952   57258 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:07:14.321290   57258 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-p896s) labels changed node wide: true\nI0422 21:07:16.714195   57258 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:07:16.715951   57258 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:07:16.842924   57258 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:10:02.784504   57258 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 21:10:02.792495   57258 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 21:10:02.792519   57258 openshift-tuned.go:883] Increasing resyncPeriod to 128\n
Apr 22 21:12:17.868 E ns/openshift-sdn pod/ovs-wl465 node/ip-10-0-130-132.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): NFO|br0<->unix#513: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:53.704Z|00156|connmgr|INFO|br0<->unix#516: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:53.742Z|00157|bridge|INFO|bridge br0: deleted interface veth69ad8672 on port 11\n2020-04-22T21:09:53.816Z|00158|connmgr|INFO|br0<->unix#519: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:53.869Z|00159|connmgr|INFO|br0<->unix#522: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:53.903Z|00160|bridge|INFO|bridge br0: deleted interface veth69353d89 on port 6\n2020-04-22T21:09:53.954Z|00161|connmgr|INFO|br0<->unix#525: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:53.990Z|00162|connmgr|INFO|br0<->unix#528: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:54.022Z|00163|bridge|INFO|bridge br0: deleted interface vethfba65c15 on port 3\n2020-04-22T21:09:54.074Z|00164|connmgr|INFO|br0<->unix#531: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:54.115Z|00165|connmgr|INFO|br0<->unix#534: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:54.150Z|00166|bridge|INFO|bridge br0: deleted interface vethd1761408 on port 13\n2020-04-22T21:09:54.224Z|00167|connmgr|INFO|br0<->unix#537: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:54.271Z|00168|connmgr|INFO|br0<->unix#540: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:54.335Z|00169|bridge|INFO|bridge br0: deleted interface vethee8f1ffd on port 15\n2020-04-22T21:09:54.379Z|00170|connmgr|INFO|br0<->unix#543: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:54.419Z|00171|connmgr|INFO|br0<->unix#546: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:54.447Z|00172|bridge|INFO|bridge br0: deleted interface veth7e451366 on port 10\n2020-04-22T21:10:23.006Z|00173|connmgr|INFO|br0<->unix#570: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:10:23.033Z|00174|connmgr|INFO|br0<->unix#573: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:10:23.053Z|00175|bridge|INFO|bridge br0: deleted interface veth32477bf3 on port 16\nTerminated\n
Apr 22 21:12:17.921 E ns/openshift-multus pod/multus-2fhqj node/ip-10-0-130-132.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:12:17.956 E ns/openshift-machine-config-operator pod/machine-config-daemon-k892h node/ip-10-0-130-132.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:12:19.525 E ns/openshift-cluster-node-tuning-operator pod/tuned-dmhdt node/ip-10-0-132-141.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 81   67172 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-controller-c86d49447-hhfjb) labels changed node wide: true\nI0422 21:08:07.009804   67172 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:08:07.011417   67172 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:08:07.176098   67172 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:09:53.348735   67172 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-2-ip-10-0-132-141.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:09:53.353201   67172 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-5-ip-10-0-132-141.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:09:53.376261   67172 openshift-tuned.go:550] Pod (openshift-cluster-version/version--kf7xr-4mtnf) labels changed node wide: true\nI0422 21:09:57.009826   67172 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:09:57.011195   67172 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:09:57.137886   67172 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:09:57.138297   67172 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-132-141.us-east-2.compute.internal) labels changed node wide: true\nI0422 21:10:02.009826   67172 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:10:02.011166   67172 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:10:02.136500   67172 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:10:02.318835   67172 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-132-141.us-east-2.compute.internal) labels changed node wide: true\n
Apr 22 21:12:19.543 E ns/openshift-monitoring pod/node-exporter-rrsg2 node/ip-10-0-132-141.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:56:50Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:50Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:12:19.595 E ns/openshift-controller-manager pod/controller-manager-76krz node/ip-10-0-132-141.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 22 21:12:19.629 E ns/openshift-sdn pod/sdn-controller-8prgx node/ip-10-0-132-141.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0422 21:00:43.237734       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 22 21:12:19.657 E ns/openshift-sdn pod/ovs-7flnz node/ip-10-0-132-141.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): unix#483: send error: Broken pipe\n2020-04-22T21:09:54.161Z|00024|reconnect|WARN|unix#483: connection dropped (Broken pipe)\n2020-04-22T21:09:54.439Z|00170|connmgr|INFO|br0<->unix#567: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:54.467Z|00171|bridge|INFO|bridge br0: deleted interface veth47e773bb on port 7\n2020-04-22T21:09:55.131Z|00172|connmgr|INFO|br0<->unix#570: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:55.180Z|00173|connmgr|INFO|br0<->unix#573: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:55.225Z|00174|bridge|INFO|bridge br0: deleted interface veth2696e6ce on port 16\n2020-04-22T21:09:55.363Z|00175|connmgr|INFO|br0<->unix#576: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:55.810Z|00025|jsonrpc|WARN|unix#509: send error: Broken pipe\n2020-04-22T21:09:55.811Z|00026|reconnect|WARN|unix#509: connection dropped (Broken pipe)\n2020-04-22T21:09:56.120Z|00027|jsonrpc|WARN|unix#513: send error: Broken pipe\n2020-04-22T21:09:56.120Z|00028|reconnect|WARN|unix#513: connection dropped (Broken pipe)\n2020-04-22T21:09:56.192Z|00029|jsonrpc|WARN|unix#516: receive error: Connection reset by peer\n2020-04-22T21:09:56.192Z|00030|reconnect|WARN|unix#516: connection dropped (Connection reset by peer)\n2020-04-22T21:09:55.428Z|00176|connmgr|INFO|br0<->unix#579: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:55.475Z|00177|bridge|INFO|bridge br0: deleted interface vetha31e1ec1 on port 12\n2020-04-22T21:09:55.715Z|00178|connmgr|INFO|br0<->unix#582: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:55.769Z|00179|connmgr|INFO|br0<->unix#585: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:55.829Z|00180|bridge|INFO|bridge br0: deleted interface veth925391cc on port 19\n2020-04-22T21:09:56.152Z|00181|connmgr|INFO|br0<->unix#591: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:09:56.182Z|00182|connmgr|INFO|br0<->unix#594: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:09:56.203Z|00183|bridge|INFO|bridge br0: deleted interface veth600a16e9 on port 10\nTerminated\n
Apr 22 21:12:19.704 E ns/openshift-multus pod/multus-admission-controller-tqdd6 node/ip-10-0-132-141.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 22 21:12:19.723 E ns/openshift-multus pod/multus-2bdfm node/ip-10-0-132-141.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:12:19.771 E ns/openshift-machine-config-operator pod/machine-config-daemon-4qhxt node/ip-10-0-132-141.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:12:19.789 E ns/openshift-machine-config-operator pod/machine-config-server-jbb9m node/ip-10-0-132-141.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0422 21:10:01.859681       1 start.go:38] Version: machine-config-daemon-4.3.14-202004200318-2-g56203fd3-dirty (56203fd320b6a22dcaa9a6b312de2f22484f9b12)\nI0422 21:10:01.860693       1 api.go:51] Launching server on :22624\nI0422 21:10:01.860999       1 api.go:51] Launching server on :22623\n
Apr 22 21:12:19.939 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): kube-apiserver-ip-10-0-132-141.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nW0422 21:10:02.587353       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.136.212 10.0.144.25]\nI0422 21:10:02.590621       1 healthz.go:191] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\nI0422 21:10:02.592253       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-132-141.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Apr 22 21:12:19.939 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0422 20:51:57.979304       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 22 21:12:19.939 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:02:03.034563       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:02:03.035016       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:02:03.240850       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:02:03.241187       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 22 21:12:19.958 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): k8s.io/v1beta1/events?resourceVersion=31428&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.840799       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?resourceVersion=31114&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.848127       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?resourceVersion=27989&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.848178       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=19555&timeout=5m39s&timeoutSeconds=339&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.848227       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?resourceVersion=27989&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.848320       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=31133&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0422 21:10:02.848514       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 22 21:12:19.958 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:08:42.767282       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:08:42.767749       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:08:52.776056       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:08:52.776493       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:02.785640       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:02.786451       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:12.794029       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:12.794371       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:22.804233       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:22.804569       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:32.812979       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:32.813306       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:42.819353       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:42.820664       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:09:52.848244       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:09:52.848730       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Apr 22 21:12:19.958 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): d-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1587587687" (2020-04-22 20:34:58 +0000 UTC to 2022-04-22 20:34:59 +0000 UTC (now=2020-04-22 20:53:30.550296147 +0000 UTC))\nI0422 20:53:30.550666       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587588810" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588810" (2020-04-22 19:53:30 +0000 UTC to 2021-04-22 19:53:30 +0000 UTC (now=2020-04-22 20:53:30.550642058 +0000 UTC))\nI0422 20:53:30.550821       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0422 20:53:30.550829       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1587588810" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588810" (2020-04-22 19:53:30 +0000 UTC to 2021-04-22 19:53:30 +0000 UTC (now=2020-04-22 20:53:30.550810529 +0000 UTC))\nI0422 20:53:30.550864       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0422 20:53:30.550902       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nI0422 20:53:30.551729       1 secure_serving.go:178] Serving securely on [::]:10257\nI0422 20:53:30.553079       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0422 20:53:30.554942       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\n
Apr 22 21:12:19.975 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): ntication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1587587684" [] issuer="kubelet-signer" (2020-04-22 20:34:43 +0000 UTC to 2020-04-23 20:16:54 +0000 UTC (now=2020-04-22 20:53:30.622645978 +0000 UTC))\nI0422 20:53:30.622680       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-22 20:16:52 +0000 UTC to 2020-04-23 20:16:52 +0000 UTC (now=2020-04-22 20:53:30.622672047 +0000 UTC))\nI0422 20:53:30.622975       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1587587687" (2020-04-22 20:34:57 +0000 UTC to 2022-04-22 20:34:58 +0000 UTC (now=2020-04-22 20:53:30.622953492 +0000 UTC))\nI0422 20:53:30.623239       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587588810" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588809" (2020-04-22 19:53:29 +0000 UTC to 2021-04-22 19:53:29 +0000 UTC (now=2020-04-22 20:53:30.623222145 +0000 UTC))\nI0422 20:53:30.623323       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1587588810" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588809" (2020-04-22 19:53:29 +0000 UTC to 2021-04-22 19:53:29 +0000 UTC (now=2020-04-22 20:53:30.623312101 +0000 UTC))\nI0422 20:53:30.706443       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Apr 22 21:12:21.901 E ns/openshift-multus pod/multus-2fhqj node/ip-10-0-130-132.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:12:23.995 E ns/openshift-monitoring pod/node-exporter-rrsg2 node/ip-10-0-132-141.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:12:24.021 E ns/openshift-multus pod/multus-2bdfm node/ip-10-0-132-141.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:12:26.144 E ns/openshift-machine-config-operator pod/machine-config-daemon-k892h node/ip-10-0-130-132.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 22 21:12:26.383 E ns/openshift-multus pod/multus-2bdfm node/ip-10-0-132-141.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:12:29.502 E ns/openshift-machine-config-operator pod/machine-config-daemon-4qhxt node/ip-10-0-132-141.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 22 21:12:40.040 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 22 21:12:42.457 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-143-150.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/22 21:10:20 Watching directory: "/etc/alertmanager/config"\n
Apr 22 21:12:42.457 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-143-150.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/22 21:10:20 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/22 21:10:20 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/22 21:10:20 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/22 21:10:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/22 21:10:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/22 21:10:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/22 21:10:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/22 21:10:50 http.go:106: HTTPS: listening on [::]:9095\n
Apr 22 21:12:43.527 E ns/openshift-marketplace pod/certified-operators-7bcbf66578-7jrcj node/ip-10-0-143-150.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 22 21:12:51.963 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6df4b74846-s58vn node/ip-10-0-136-212.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ge":"NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-132-141.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-132-141.us-east-2.compute.internal container=\"scheduler\" is not ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-22T20:55:28Z","message":"Progressing: 3 nodes are at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-22T20:37:18Z","message":"Available: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-22T20:34:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0422 21:12:39.606652       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"7afaca16-8a09-4d90-835d-8db7a000aba3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-132-141.us-east-2.compute.internal\" not ready since 2020-04-22 21:12:19 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-132-141.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-132-141.us-east-2.compute.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-132-141.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-132-141.us-east-2.compute.internal container=\"scheduler\" is not ready"\nI0422 21:12:50.763762       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 21:12:50.764165       1 leaderelection.go:66] leaderelection lost\n
Apr 22 21:12:52.964 E ns/openshift-machine-api pod/machine-api-operator-66bcfbb665-sknql node/ip-10-0-136-212.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 22 21:13:10.024 E kube-apiserver Kube API started failing: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 22 21:13:12.968 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-132.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T21:13:02.313Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T21:13:02.316Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T21:13:02.317Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T21:13:02.319Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T21:13:02.319Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T21:13:02.319Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T21:13:02.320Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T21:13:02.320Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T21:13:02.320Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 21:13:17.062 E ns/openshift-ingress-operator pod/ingress-operator-69f5cc86c8-lmbn8 node/ip-10-0-132-141.us-east-2.compute.internal container=ingress-operator container exited with code 1 (Error): 2020-04-22T21:13:15.876Z	ERROR	operator.main	ingress-operator/start.go:71	failed to create kube client	{"error": "failed to discover api rest mapper: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"}\n
Apr 22 21:14:03.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:14:50.474 E ns/openshift-cluster-node-tuning-operator pod/tuned-965rv node/ip-10-0-154-151.us-east-2.compute.internal container=tuned container exited with code 143 (Error): ed.go:441] Getting recommended profile...\nI0422 21:10:50.452980   85787 openshift-tuned.go:635] Active profile () != recommended profile (openshift-node)\nI0422 21:10:50.453035   85787 openshift-tuned.go:263] Starting tuned...\n2020-04-22 21:10:50,560 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-22 21:10:50,565 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-22 21:10:50,565 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-22 21:10:50,566 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-22 21:10:50,567 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-22 21:10:50,599 INFO     tuned.daemon.controller: starting controller\n2020-04-22 21:10:50,599 INFO     tuned.daemon.daemon: starting tuning\n2020-04-22 21:10:50,604 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-22 21:10:50,605 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-22 21:10:50,608 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-22 21:10:50,609 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-22 21:10:50,611 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-22 21:10:50,727 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-22 21:10:50,728 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0422 21:10:54.808524   85787 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-8frbk) labels changed node wide: false\nI0422 21:13:05.047574   85787 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 21:13:05.058173   85787 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 21:13:05.058194   85787 openshift-tuned.go:883] Increasing resyncPeriod to 128\n
Apr 22 21:15:03.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:15:03.116 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Apr 22 21:15:20.207 E ns/openshift-monitoring pod/node-exporter-44x7t node/ip-10-0-143-150.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:56:17Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:17Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:15:20.256 E ns/openshift-sdn pod/ovs-llrvz node/ip-10-0-143-150.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): 21:12:42.792Z|00188|connmgr|INFO|br0<->unix#662: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:42.831Z|00189|bridge|INFO|bridge br0: deleted interface veth7a765aa5 on port 9\n2020-04-22T21:12:42.873Z|00190|connmgr|INFO|br0<->unix#665: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:42.918Z|00191|connmgr|INFO|br0<->unix#668: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:42.952Z|00192|bridge|INFO|bridge br0: deleted interface vethe594ff50 on port 20\n2020-04-22T21:12:42.994Z|00193|connmgr|INFO|br0<->unix#671: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:43.040Z|00194|connmgr|INFO|br0<->unix#674: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:43.070Z|00195|bridge|INFO|bridge br0: deleted interface veth6eda0596 on port 11\n2020-04-22T21:12:43.113Z|00196|connmgr|INFO|br0<->unix#677: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:43.163Z|00197|connmgr|INFO|br0<->unix#680: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:43.195Z|00198|bridge|INFO|bridge br0: deleted interface vethbc093265 on port 4\n2020-04-22T21:13:12.902Z|00199|connmgr|INFO|br0<->unix#704: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:13:12.929Z|00200|connmgr|INFO|br0<->unix#707: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:13:12.950Z|00201|bridge|INFO|bridge br0: deleted interface vethf2fd9bc9 on port 19\n2020-04-22T21:13:26.054Z|00202|connmgr|INFO|br0<->unix#718: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:13:26.080Z|00203|connmgr|INFO|br0<->unix#721: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:13:26.102Z|00204|bridge|INFO|bridge br0: deleted interface veth86fea81a on port 12\n2020-04-22T21:13:26.091Z|00024|jsonrpc|WARN|Dropped 9 log messages in last 659 seconds (most recently, 659 seconds ago) due to excessive rate\n2020-04-22T21:13:26.091Z|00025|jsonrpc|WARN|unix#631: receive error: Connection reset by peer\n2020-04-22T21:13:26.091Z|00026|reconnect|WARN|unix#631: connection dropped (Connection reset by peer)\nExiting ovs-vswitchd (76170).\nTerminated\n
Apr 22 21:15:20.302 E ns/openshift-multus pod/multus-lchkl node/ip-10-0-143-150.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:15:20.339 E ns/openshift-machine-config-operator pod/machine-config-daemon-l8rp7 node/ip-10-0-143-150.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:15:20.353 E ns/openshift-cluster-node-tuning-operator pod/tuned-vbzdh node/ip-10-0-143-150.us-east-2.compute.internal container=tuned container exited with code 143 (Error): :550] Pod (e2e-k8s-sig-apps-deployment-upgrade-2530/dp-657fc4b57d-dzbwx) labels changed node wide: true\nI0422 21:12:43.285708  110350 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:12:43.292660  110350 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:12:43.466403  110350 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:12:43.468469  110350 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0422 21:12:48.285689  110350 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:12:48.287258  110350 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:12:48.398240  110350 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:12:54.760981  110350 openshift-tuned.go:550] Pod (openshift-ingress/router-default-65585bd8fd-qb8mr) labels changed node wide: true\nI0422 21:12:58.285673  110350 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:12:58.286984  110350 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:12:58.397794  110350 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:13:24.749455  110350 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-9360/foo-7hcww) labels changed node wide: true\nI0422 21:13:28.285679  110350 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:13:28.287217  110350 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:13:28.396809  110350 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:13:34.754338  110350 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-1658/service-test-8dqbt) labels changed node wide: true\n
Apr 22 21:15:20.897 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): tor for resource "operators.coreos.com/v1, Resource=operatorsources": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorsources", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors"]\nI0422 21:11:00.617152       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0422 21:11:00.617327       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0422 21:11:00.617358       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0422 21:11:00.617387       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0422 21:11:00.617411       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0422 21:11:00.659649       1 policy_controller.go:144] Started "openshift.io/namespace-security-allocation"\nI0422 21:11:00.659808       1 controller_utils.go:1027] Waiting for caches to sync for namespace-security-allocation-controller controller\nI0422 21:11:00.697988       1 policy_controller.go:144] Started "openshift.io/resourcequota"\nI0422 21:11:00.698320       1 policy_controller.go:147] Started Origin Controllers\nI0422 21:11:00.698703       1 resource_quota_controller.go:276] Starting resource quota controller\nI0422 21:11:00.698737       1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller\nI0422 21:11:00.766367       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0422 21:11:00.802618       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0422 21:11:01.417611       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\n
Apr 22 21:15:20.897 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:11:47.962941       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:11:47.963375       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:11:57.974500       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:11:57.974936       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:07.987464       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:07.987797       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:17.997603       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:17.997961       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:28.007312       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:28.007741       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:38.016993       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:38.017428       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:48.038529       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:48.038980       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:12:58.063857       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:12:58.064376       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Apr 22 21:15:20.897 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): or, name: cluster-network-operator, uid: d1950060-c498-4648-8bfe-e40f27816015] with propagation policy Background\nI0422 21:12:58.295530       1 garbagecollector.go:405] processing item [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: 748fe37c-c45d-4d87-b4bc-5bd1e895a970]\nI0422 21:12:58.305799       1 garbagecollector.go:518] delete object [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: 748fe37c-c45d-4d87-b4bc-5bd1e895a970] with propagation policy Background\nW0422 21:13:00.370863       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 34034 (34089)\nI0422 21:13:02.197711       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44, need 3, creating 1\nI0422 21:13:02.213917       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-7fc4679f44", UID:"24b27a06-dff2-4745-bd86-0fc2e4ac27ce", APIVersion:"apps/v1", ResourceVersion:"34004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-7fc4679f44-8lt6j\nI0422 21:13:04.398718       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0422 21:13:04.399122       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"4fed4143-1884-4935-ba0d-15a9800dd2bc", APIVersion:"v1", ResourceVersion:"32703", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\n
Apr 22 21:15:20.954 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): Ephemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:12:52.552853       1 scheduler.go:667] pod openshift-network-operator/network-operator-6fd89b559-wxjbx is bound successfully on node "ip-10-0-132-141.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:12:55.026838       1 scheduler.go:667] pod openshift-monitoring/prometheus-k8s-0 is bound successfully on node "ip-10-0-130-132.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:12:55.062133       1 scheduler.go:667] pod openshift-monitoring/alertmanager-main-0 is bound successfully on node "ip-10-0-130-132.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:13:02.220108       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-8lt6j: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0422 21:13:04.029997       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-8lt6j: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 22 21:15:21.016 E ns/openshift-monitoring pod/node-exporter-m5694 node/ip-10-0-136-212.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:57:00Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:00Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:15:21.081 E ns/openshift-controller-manager pod/controller-manager-7nvz5 node/ip-10-0-136-212.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 22 21:15:21.107 E ns/openshift-sdn pod/sdn-controller-zv42x node/ip-10-0-136-212.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0422 21:00:49.770157       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 22 21:15:21.135 E ns/openshift-sdn pod/ovs-mwp5x node/ip-10-0-136-212.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): nmgr|INFO|br0<->unix#736: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:50.447Z|00220|connmgr|INFO|br0<->unix#739: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:50.497Z|00221|bridge|INFO|bridge br0: deleted interface veth64ad774d on port 7\n2020-04-22T21:12:51.203Z|00222|connmgr|INFO|br0<->unix#743: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:51.234Z|00223|connmgr|INFO|br0<->unix#746: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:51.266Z|00224|bridge|INFO|bridge br0: deleted interface vethd74fd4eb on port 17\n2020-04-22T21:12:51.651Z|00225|connmgr|INFO|br0<->unix#749: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:51.689Z|00226|connmgr|INFO|br0<->unix#752: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:51.723Z|00227|bridge|INFO|bridge br0: deleted interface veth0a2878ee on port 25\n2020-04-22T21:12:52.661Z|00228|connmgr|INFO|br0<->unix#755: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:52.700Z|00229|connmgr|INFO|br0<->unix#758: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:52.733Z|00230|bridge|INFO|bridge br0: deleted interface veth554cdacb on port 10\n2020-04-22T21:12:52.924Z|00231|bridge|INFO|bridge br0: added interface veth1676eabe on port 26\n2020-04-22T21:12:52.969Z|00232|connmgr|INFO|br0<->unix#761: 5 flow_mods in the last 0 s (5 adds)\n2020-04-22T21:12:53.034Z|00233|connmgr|INFO|br0<->unix#765: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:53.037Z|00234|connmgr|INFO|br0<->unix#767: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-22T21:12:56.048Z|00235|connmgr|INFO|br0<->unix#773: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:12:56.090Z|00236|connmgr|INFO|br0<->unix#776: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:12:56.119Z|00237|bridge|INFO|bridge br0: deleted interface veth1676eabe on port 26\n2020-04-22T21:12:56.110Z|00035|reconnect|WARN|unix#671: connection dropped (Connection reset by peer)\n2020-04-22T21:13:04Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\nTerminated\n
Apr 22 21:15:21.175 E ns/openshift-multus pod/multus-admission-controller-s44cr node/ip-10-0-136-212.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 22 21:15:21.210 E ns/openshift-multus pod/multus-wsjqw node/ip-10-0-136-212.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:15:21.263 E ns/openshift-machine-config-operator pod/machine-config-server-xp58t node/ip-10-0-136-212.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0422 21:09:48.224538       1 start.go:38] Version: machine-config-daemon-4.3.14-202004200318-2-g56203fd3-dirty (56203fd320b6a22dcaa9a6b312de2f22484f9b12)\nI0422 21:09:48.225542       1 api.go:51] Launching server on :22624\nI0422 21:09:48.225609       1 api.go:51] Launching server on :22623\n
Apr 22 21:15:21.291 E ns/openshift-machine-config-operator pod/machine-config-daemon-f4nsj node/ip-10-0-136-212.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:15:21.331 E ns/openshift-cluster-node-tuning-operator pod/tuned-n9fmm node/ip-10-0-136-212.us-east-2.compute.internal container=tuned container exited with code 143 (Error): revision-pruner-3-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:12:50.133953   97365 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-4-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:12:50.313326   97365 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:12:50.518430   97365 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:12:50.949700   97365 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-4-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:12:51.124343   97365 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-136-212.us-east-2.compute.internal) labels changed node wide: true\nI0422 21:12:55.682353   97365 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:12:55.683889   97365 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:12:55.808973   97365 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:12:57.427430   97365 openshift-tuned.go:550] Pod (openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-6df4b74846-s58vn) labels changed node wide: true\nI0422 21:13:00.687460   97365 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:13:00.693325   97365 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:13:00.969575   97365 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:13:04.025536   97365 openshift-tuned.go:550] Pod (openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-9kj66) labels changed node wide: true\n
Apr 22 21:15:22.867 E ns/openshift-multus pod/multus-lchkl node/ip-10-0-143-150.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:15:24.770 E ns/openshift-multus pod/multus-lchkl node/ip-10-0-143-150.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:15:24.955 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): red revision has been compacted\nE0422 21:13:04.408957       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.408991       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409091       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409135       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409272       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409651       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409711       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409798       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409845       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.409874       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.408523       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.410063       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:13:04.410093       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0422 21:13:04.680307       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-136-212.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0422 21:13:04.680530       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Apr 22 21:15:24.955 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0422 20:53:50.023852       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 22 21:15:24.955 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-136-212.us-east-2.compute.internal node/ip-10-0-136-212.us-east-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:03:56.805201       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:03:56.805611       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:03:57.011842       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:03:57.012166       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 22 21:15:26.268 E ns/openshift-multus pod/multus-wsjqw node/ip-10-0-136-212.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:15:28.443 E ns/openshift-machine-config-operator pod/machine-config-daemon-l8rp7 node/ip-10-0-143-150.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 22 21:15:28.730 E ns/openshift-multus pod/multus-wsjqw node/ip-10-0-136-212.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:15:31.741 E ns/openshift-multus pod/multus-wsjqw node/ip-10-0-136-212.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:15:32.763 E ns/openshift-machine-config-operator pod/machine-config-daemon-f4nsj node/ip-10-0-136-212.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 22 21:15:36.662 E ns/openshift-monitoring pod/prometheus-adapter-5b55699468-rsxdm node/ip-10-0-154-151.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0422 20:56:11.412589       1 adapter.go:93] successfully using in-cluster auth\nI0422 20:56:11.709123       1 secure_serving.go:116] Serving securely on [::]:6443\nW0422 21:13:05.055269       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received\n
Apr 22 21:15:40.904 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 22 21:15:50.319 E ns/openshift-authentication-operator pod/authentication-operator-6678c9c8fc-xcqff node/ip-10-0-144-25.us-east-2.compute.internal container=operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-authentication-operator_authentication-operator-6678c9c8fc-xcqff_f06d0102-258e-4a13-9f69-3cd0c28ac54a/operator/0.log": lstat /var/log/pods/openshift-authentication-operator_authentication-operator-6678c9c8fc-xcqff_f06d0102-258e-4a13-9f69-3cd0c28ac54a/operator/0.log: no such file or directory
Apr 22 21:15:50.465 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-143-150.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-04-22T21:15:48.078Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-22T21:15:48.082Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-22T21:15:48.083Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-22T21:15:48.084Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-04-22T21:15:48.084Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-22T21:15:48.084Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-22T21:15:48.097Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-22T21:15:48.097Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-04-22
Apr 22 21:15:53.096 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-779558585f-dfz9n node/ip-10-0-144-25.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): s/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal container=\"kube-controller-manager-6\" is not ready\nNodeControllerDegraded: The master nodes not ready: node \"ip-10-0-136-212.us-east-2.compute.internal\" not ready since 2020-04-22 21:15:20 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "StaticPodsDegraded: nodes/ip-10-0-136-212.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal container=\"cluster-policy-controller-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-136-212.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal container=\"kube-controller-manager-6\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0422 21:15:47.539396       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"0677cfa5-18ee-44ba-82df-3f4166b66c29", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-136-212.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal container=\"cluster-policy-controller-6\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-136-212.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-136-212.us-east-2.compute.internal container=\"kube-controller-manager-6\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0422 21:15:50.233711       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 21:15:50.233915       1 leaderelection.go:66] leaderelection lost\nF0422 21:15:50.264499       1 builder.go:217] server exited\n
Apr 22 21:15:54.993 E ns/openshift-service-ca pod/configmap-cabundle-injector-5d8b89f546-spf9j node/ip-10-0-144-25.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Apr 22 21:15:56.562 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7bdf57b7-sqpv6 node/ip-10-0-144-25.us-east-2.compute.internal container=operator container exited with code 255 (Error): tor/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:15:19.692929       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:15:23.672172       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0422 21:15:23.672212       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0422 21:15:23.682525       1 httplog.go:90] GET /metrics: (43.436472ms) 200 [Prometheus/2.14.0 10.131.0.19:34252]\nI0422 21:15:29.705217       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:15:33.260766       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0422 21:15:33.260806       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0422 21:15:33.263279       1 httplog.go:90] GET /metrics: (2.656927ms) 200 [Prometheus/2.14.0 10.128.2.21:59678]\nI0422 21:15:39.718079       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:15:49.744988       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0422 21:15:53.652425       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0422 21:15:53.652467       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0422 21:15:53.656073       1 httplog.go:90] GET /metrics: (13.139185ms) 200 [Prometheus/2.14.0 10.131.0.19:34252]\nI0422 21:15:54.176909       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0422 21:15:54.179638       1 leaderelection.go:66] leaderelection lost\n
Apr 22 21:16:08.871 E kube-apiserver failed contacting the API: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37261&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp 3.23.27.15:6443: connect: connection refused
Apr 22 21:16:08.872 E kube-apiserver failed contacting the API: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=37212&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp 3.23.27.15:6443: connect: connection refused
Apr 22 21:16:09.110 E kube-apiserver Kube API started failing: Get https://api.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 3.22.9.132:6443: connect: connection refused
Apr 22 21:16:17.396 E ns/openshift-monitoring pod/prometheus-operator-5d559d454d-f5lbq node/ip-10-0-136-212.us-east-2.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-04-22T21:15:57.121876442Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."\nts=2020-04-22T21:15:57.215691944Z caller=main.go:96 msg="Staring insecure server on :8080"\nlevel=info ts=2020-04-22T21:15:57.242928368Z caller=operator.go:441 component=prometheusoperator msg="connection established" cluster-version=v1.16.2\nlevel=info ts=2020-04-22T21:15:57.247173836Z caller=operator.go:219 component=alertmanageroperator msg="connection established" cluster-version=v1.16.2\nlevel=info ts=2020-04-22T21:15:58.147504993Z caller=operator.go:641 component=alertmanageroperator msg="CRD updated" crd=Alertmanager\nlevel=info ts=2020-04-22T21:15:58.191735556Z caller=operator.go:1870 component=prometheusoperator msg="CRD updated" crd=Prometheus\nlevel=info ts=2020-04-22T21:15:58.228828601Z caller=operator.go:1870 component=prometheusoperator msg="CRD updated" crd=ServiceMonitor\nlevel=info ts=2020-04-22T21:15:58.257821842Z caller=operator.go:1870 component=prometheusoperator msg="CRD updated" crd=PodMonitor\nlevel=info ts=2020-04-22T21:15:58.304599146Z caller=operator.go:1870 component=prometheusoperator msg="CRD updated" crd=PrometheusRule\nlevel=info ts=2020-04-22T21:16:01.345867944Z caller=operator.go:235 component=alertmanageroperator msg="CRD API endpoints ready"\nlevel=info ts=2020-04-22T21:16:01.547802616Z caller=operator.go:190 component=alertmanageroperator msg="successfully synced all caches"\nlevel=info ts=2020-04-22T21:16:01.549378915Z caller=operator.go:462 component=alertmanageroperator msg="sync alertmanager" key=openshift-monitoring/main\nlevel=info ts=2020-04-22T21:16:01.754258975Z caller=operator.go:462 component=alertmanageroperator msg="sync alertmanager" key=openshift-monitoring/main\nts=2020-04-22T21:16:09.938701093Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="creating CRDs failed: waiting for PodMonitor crd failed: timed out waiting for Custom Resource: failed to list CRD: rpc error: code = Unavailable desc = etcdserver: leader changed"\n
Apr 22 21:16:18.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:16:19.899 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-141.us-east-2.compute.internal node/ip-10-0-132-141.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): 6759f: packageserver-8654c4649, packageserver-86777b76f4, packageserver-58c7cc6d5, packageserver-5dd6d6759f, packageserver-7fdd4957c6, packageserver-dd84fc8f7, packageserver-77967b86db\nI0422 21:16:17.045086       1 controller_utils.go:602] Controller packageserver-5dd6d6759f deleting pod openshift-operator-lifecycle-manager/packageserver-5dd6d6759f-r4jq5\nI0422 21:16:17.077283       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-5dd6d6759f", UID:"e8a1e7cf-77a4-457e-9f8b-eda78f16674a", APIVersion:"apps/v1", ResourceVersion:"37303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-5dd6d6759f-r4jq5\nI0422 21:16:17.133239       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0422 21:16:18.763831       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-marketplace/community-operators-6687dbfcd9, need 1, creating 1\nI0422 21:16:18.764270       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"community-operators", UID:"cb35282f-9727-48da-b4c3-1dec7d8c60e6", APIVersion:"apps/v1", ResourceVersion:"37356", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set community-operators-6687dbfcd9 to 1\nI0422 21:16:18.782788       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/community-operators: Operation cannot be fulfilled on deployments.apps "community-operators": the object has been modified; please apply your changes to the latest version and try again\nI0422 21:16:18.825272       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0422 21:16:18.825419       1 controllermanager.go:291] leaderelection lost\n
Apr 22 21:16:21.444 E ns/openshift-monitoring pod/prometheus-operator-5d559d454d-f5lbq node/ip-10-0-136-212.us-east-2.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-04-22T21:16:19.884880604Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."\nts=2020-04-22T21:16:19.956922645Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-04-22T21:16:19.969041103Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Apr 22 21:17:48.024 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 22 21:18:03.421 E ns/openshift-monitoring pod/node-exporter-6pw2r node/ip-10-0-154-151.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:56:43Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:56:43Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:18:03.510 E ns/openshift-multus pod/multus-lpfxz node/ip-10-0-154-151.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:18:03.535 E ns/openshift-sdn pod/ovs-82kq8 node/ip-10-0-154-151.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): ods in the last 0 s (2 deletes)\n2020-04-22T21:15:36.385Z|00144|connmgr|INFO|br0<->unix#683: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:36.409Z|00145|bridge|INFO|bridge br0: deleted interface veth283ecb48 on port 15\n2020-04-22T21:15:36.451Z|00146|connmgr|INFO|br0<->unix#686: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:36.496Z|00147|connmgr|INFO|br0<->unix#689: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:36.520Z|00148|bridge|INFO|bridge br0: deleted interface veth9d3d6dce on port 16\n2020-04-22T21:15:36.563Z|00149|connmgr|INFO|br0<->unix#692: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:36.610Z|00150|connmgr|INFO|br0<->unix#695: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:36.639Z|00151|bridge|INFO|bridge br0: deleted interface vethd5292d82 on port 14\n2020-04-22T21:15:36.680Z|00152|connmgr|INFO|br0<->unix#698: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:36.717Z|00153|connmgr|INFO|br0<->unix#701: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:36.748Z|00154|bridge|INFO|bridge br0: deleted interface vethbd311622 on port 6\n2020-04-22T21:15:36.827Z|00155|connmgr|INFO|br0<->unix#704: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:36.866Z|00156|connmgr|INFO|br0<->unix#707: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:36.894Z|00157|bridge|INFO|bridge br0: deleted interface veth281e7c50 on port 7\n2020-04-22T21:15:36.957Z|00158|connmgr|INFO|br0<->unix#710: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:37.010Z|00159|connmgr|INFO|br0<->unix#713: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:37.044Z|00160|bridge|INFO|bridge br0: deleted interface veth0ecc449f on port 8\n2020-04-22T21:16:06.123Z|00161|connmgr|INFO|br0<->unix#737: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:16:06.150Z|00162|connmgr|INFO|br0<->unix#740: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:16:06.171Z|00163|bridge|INFO|bridge br0: deleted interface vethee95b192 on port 9\nExiting ovs-vswitchd (67023).\nTerminated\n
Apr 22 21:18:03.585 E ns/openshift-machine-config-operator pod/machine-config-daemon-g4wb4 node/ip-10-0-154-151.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:18:03.600 E ns/openshift-cluster-node-tuning-operator pod/tuned-x4qlb node/ip-10-0-154-151.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 30,544 INFO     tuned.daemon.daemon: starting tuning\n2020-04-22 21:15:30,549 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-22 21:15:30,550 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-22 21:15:30,553 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-22 21:15:30,554 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-22 21:15:30,556 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-22 21:15:30,668 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-22 21:15:30,670 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0422 21:15:37.762730   97389 openshift-tuned.go:550] Pod (openshift-console/downloads-597bf76456-nb2cp) labels changed node wide: true\nI0422 21:15:40.203176   97389 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:15:40.204625   97389 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:15:40.318633   97389 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:15:44.813059   97389 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0422 21:15:45.203162   97389 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:15:45.204563   97389 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:15:45.317540   97389 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0422 21:16:08.802247   97389 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0422 21:16:08.809800   97389 openshift-tuned.go:881] Pod event watch channel closed.\nI0422 21:16:08.809829   97389 openshift-tuned.go:883] Increasing resyncPeriod to 128\n
Apr 22 21:18:07.431 E ns/openshift-multus pod/multus-lpfxz node/ip-10-0-154-151.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:18:13.695 E ns/openshift-machine-config-operator pod/machine-config-daemon-g4wb4 node/ip-10-0-154-151.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 22 21:18:20.769 E ns/openshift-marketplace pod/redhat-operators-677c4fdb47-cvvzq node/ip-10-0-130-132.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 22 21:18:25.829 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0422 20:52:27.381528       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0422 20:52:27.384551       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0422 20:52:27.385398       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 22 21:18:25.829 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:24.730446       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:24.730883       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:34.740582       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:34.741162       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:44.751066       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:44.751637       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:54.765403       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:54.765938       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:55.102178       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:55.102770       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:55.143231       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:55.143703       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:15:55.144463       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:55.144847       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0422 21:16:04.775065       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:16:04.775486       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Apr 22 21:18:25.829 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1587587687" (2020-04-22 20:34:58 +0000 UTC to 2022-04-22 20:34:59 +0000 UTC (now=2020-04-22 20:55:59.118496474 +0000 UTC))\nI0422 20:55:59.117564       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0422 20:55:59.117572       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nI0422 20:55:59.117609       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0422 20:55:59.119439       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587588959" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588958" (2020-04-22 19:55:57 +0000 UTC to 2021-04-22 19:55:57 +0000 UTC (now=2020-04-22 20:55:59.119410801 +0000 UTC))\nI0422 20:55:59.119627       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1587588959" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587588958" (2020-04-22 19:55:57 +0000 UTC to 2021-04-22 19:55:57 +0000 UTC (now=2020-04-22 20:55:59.119600118 +0000 UTC))\nI0422 20:55:59.119699       1 secure_serving.go:178] Serving securely on [::]:10257\nI0422 20:55:59.119778       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0422 20:55:59.127614       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0422 21:13:05.784038       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: rpc error: code = Unavailable desc = etcdserver: leader changed\n
Apr 22 21:18:25.862 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): 50>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:15:55.634282       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/olm-operator-56dcdbf879-qjgzz is bound successfully on node "ip-10-0-136-212.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:15:56.880436       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-ccqkm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0422 21:15:59.872291       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-ccqkm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0422 21:16:04.348013       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-8654c4649-xx9tv is bound successfully on node "ip-10-0-132-141.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0422 21:16:04.874452       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7fc4679f44-ccqkm: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 22 21:18:25.933 E ns/openshift-monitoring pod/node-exporter-kskxj node/ip-10-0-144-25.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 4-22T20:57:16Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-22T20:57:16Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 22 21:18:25.945 E ns/openshift-controller-manager pod/controller-manager-bk2g8 node/ip-10-0-144-25.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 22 21:18:25.955 E ns/openshift-sdn pod/sdn-controller-r2z7g node/ip-10-0-144-25.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): ll:0x0, ext:63723184243, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-144-25\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-22T20:30:43Z\",\"renewTime\":\"2020-04-22T21:00:35Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-144-25 became leader'\nI0422 21:00:35.835908       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0422 21:00:35.841309       1 master.go:51] Initializing SDN master\nI0422 21:00:35.928833       1 network_controller.go:60] Started OpenShift Network Controller\nW0422 21:10:03.557776       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 26834 (31434)\nE0422 21:13:05.083424       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: Get https://api-int.ci-op-1isycq45-f3191.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=20554&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp 10.0.149.247:6443: connect: connection refused\nW0422 21:13:05.326681       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 26828 (27083)\nW0422 21:13:06.191472       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 20554 (31434)\n
Apr 22 21:18:25.968 E ns/openshift-multus pod/multus-fg8pk node/ip-10-0-144-25.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 22 21:18:25.994 E ns/openshift-multus pod/multus-admission-controller-5lj9s node/ip-10-0-144-25.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 22 21:18:26.034 E ns/openshift-sdn pod/ovs-kfd2k node/ip-10-0-144-25.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): interface veth6be84d2b on port 33\n2020-04-22T21:15:55.168Z|00282|connmgr|INFO|br0<->unix#945: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:55.234Z|00283|connmgr|INFO|br0<->unix#948: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:55.343Z|00284|bridge|INFO|bridge br0: deleted interface vethc37cab7f on port 31\n2020-04-22T21:15:55.419Z|00285|connmgr|INFO|br0<->unix#951: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:55.649Z|00286|connmgr|INFO|br0<->unix#954: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:55.702Z|00287|bridge|INFO|bridge br0: deleted interface veth3dc3488e on port 7\n2020-04-22T21:15:55.858Z|00288|connmgr|INFO|br0<->unix#957: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:55.688Z|00036|reconnect|WARN|unix#842: connection dropped (Broken pipe)\n2020-04-22T21:15:55.944Z|00289|connmgr|INFO|br0<->unix#960: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:56.047Z|00290|bridge|INFO|bridge br0: deleted interface vethb0129734 on port 32\n2020-04-22T21:15:56.763Z|00291|connmgr|INFO|br0<->unix#965: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:56.816Z|00292|connmgr|INFO|br0<->unix#968: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:56.861Z|00293|bridge|INFO|bridge br0: deleted interface vethbad380b8 on port 23\n2020-04-22T21:15:57.359Z|00294|bridge|INFO|bridge br0: added interface veth6cb93e4a on port 35\n2020-04-22T21:15:57.446Z|00295|connmgr|INFO|br0<->unix#971: 5 flow_mods in the last 0 s (5 adds)\n2020-04-22T21:15:57.556Z|00296|connmgr|INFO|br0<->unix#975: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-22T21:15:57.558Z|00297|connmgr|INFO|br0<->unix#977: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:59.688Z|00298|connmgr|INFO|br0<->unix#981: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-22T21:15:59.719Z|00299|connmgr|INFO|br0<->unix#984: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-22T21:15:59.745Z|00300|bridge|INFO|bridge br0: deleted interface veth6cb93e4a on port 35\nExiting ovs-vswitchd (83281).\nTerminated\n
Apr 22 21:18:26.064 E ns/openshift-machine-config-operator pod/machine-config-daemon-j75lq node/ip-10-0-144-25.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 22 21:18:26.077 E ns/openshift-machine-config-operator pod/machine-config-server-5gp87 node/ip-10-0-144-25.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0422 21:10:45.185090       1 start.go:38] Version: machine-config-daemon-4.3.14-202004200318-2-g56203fd3-dirty (56203fd320b6a22dcaa9a6b312de2f22484f9b12)\nI0422 21:10:45.186259       1 api.go:51] Launching server on :22624\nI0422 21:10:45.186319       1 api.go:51] Launching server on :22623\n
Apr 22 21:18:26.091 E ns/openshift-cluster-node-tuning-operator pod/tuned-xm7c6 node/ip-10-0-144-25.us-east-2.compute.internal container=tuned container exited with code 143 (Error): :15:52.108160  113661 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-2-ip-10-0-144-25.us-east-2.compute.internal) labels changed node wide: false\nI0422 21:15:52.111574  113661 openshift-tuned.go:550] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-6948759cb9-5sz5g) labels changed node wide: true\nI0422 21:15:55.850366  113661 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:15:55.859167  113661 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:15:56.722382  113661 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:15:57.885578  113661 openshift-tuned.go:550] Pod (openshift-service-ca/service-serving-cert-signer-9c69cc9fb-85hft) labels changed node wide: true\nI0422 21:16:00.847879  113661 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:16:00.849599  113661 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:16:00.997606  113661 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0422 21:16:03.654853  113661 openshift-tuned.go:550] Pod (openshift-insights/insights-operator-84fd7dc5fb-8wktf) labels changed node wide: true\nI0422 21:16:05.848272  113661 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0422 21:16:05.849814  113661 openshift-tuned.go:441] Getting recommended profile...\nI0422 21:16:05.990475  113661 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n2020-04-22 21:16:08,473 INFO     tuned.daemon.controller: terminating controller\n2020-04-22 21:16:08,473 INFO     tuned.daemon.daemon: stopping tuning\nI0422 21:16:08.474314  113661 openshift-tuned.go:137] Received signal: terminated\nI0422 21:16:08.474373  113661 openshift-tuned.go:304] Sending TERM to PID 113943\n
Apr 22 21:18:30.061 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-apiserver-6 container exited with code 1 (Error): mpacted\nE0422 21:16:08.341898       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342091       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342189       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342223       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342251       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342325       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342426       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342465       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342494       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342511       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342543       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.342773       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0422 21:16:08.346187       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nI0422 21:16:08.450168       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0422 21:16:08.450172       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-144-25.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Apr 22 21:18:30.061 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-6 container exited with code 2 (Error): I0422 20:55:47.725578       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 22 21:18:30.061 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-144-25.us-east-2.compute.internal node/ip-10-0-144-25.us-east-2.compute.internal container=kube-apiserver-cert-syncer-6 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:15:54.684660       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:54.699755       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0422 21:15:54.962406       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0422 21:15:54.962810       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 22 21:18:30.830 E ns/openshift-marketplace pod/certified-operators-7bcbf66578-5mxcq node/ip-10-0-130-132.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 22 21:18:33.115 E ns/openshift-multus pod/multus-fg8pk node/ip-10-0-144-25.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:18:35.817 E ns/openshift-marketplace pod/community-operators-677d564d64-n7v85 node/ip-10-0-130-132.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 22 21:18:36.162 E ns/openshift-multus pod/multus-fg8pk node/ip-10-0-144-25.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 22 21:18:41.201 E ns/openshift-machine-config-operator pod/machine-config-daemon-j75lq node/ip-10-0-144-25.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error):