Result: SUCCESS
Tests: 4 failed / 26 succeeded
Started: 2020-09-23 12:07
Elapsed: 1h25m
Work namespace: ci-op-k1p58lb0
Refs: release-4.3:ccb80d3c
      960:e202d44d
pod: 5d02b4fe-fd95-11ea-85e2-0a580a800da0
repo: openshift/cluster-kube-apiserver-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 36m12s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 7s of 32m0s (0%):

Sep 23 12:57:16.403 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service stopped responding to GET requests over new connections
Sep 23 12:57:17.402 - 999ms E ns/e2e-k8s-service-lb-available-240 svc/service-test Service is not responding to GET requests over new connections
Sep 23 12:57:18.742 I ns/e2e-k8s-service-lb-available-240 svc/service-test Service started responding to GET requests over new connections
Sep 23 12:57:48.403 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service stopped responding to GET requests on reused connections
Sep 23 12:57:49.402 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service is not responding to GET requests on reused connections
Sep 23 12:57:50.227 I ns/e2e-k8s-service-lb-available-240 svc/service-test Service started responding to GET requests on reused connections
Sep 23 12:59:00.403 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service stopped responding to GET requests on reused connections
Sep 23 12:59:00.403 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service stopped responding to GET requests over new connections
Sep 23 12:59:00.575 I ns/e2e-k8s-service-lb-available-240 svc/service-test Service started responding to GET requests over new connections
Sep 23 12:59:01.402 E ns/e2e-k8s-service-lb-available-240 svc/service-test Service is not responding to GET requests on reused connections
Sep 23 12:59:01.578 I ns/e2e-k8s-service-lb-available-240 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1600867261.xml
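
The summary line above is the monitor's bookkeeping: it adds up every span between a "stopped responding" event and the matching "started responding" event and reports the total against the monitored window, which is why 7 seconds over 32 minutes rounds down to 0%. Below is a minimal sketch of that calculation in Go, using hypothetical outage intervals that merely add up to 7s rather than the exact events above; the real openshift/origin monitor types are richer than this.

package main

import (
	"fmt"
	"time"
)

// outage is an assumed simplification of one monitor disruption sample:
// the span between a "stopped responding" event and the next
// "started responding" event for the same backend.
type outage struct {
	start, end time.Time
}

// summarize adds up the outage spans and reports them against the
// monitored window, in the spirit of the "Xs of Ym0s (Z%)" line above.
func summarize(samples []outage, window time.Duration) (time.Duration, float64) {
	var down time.Duration
	for _, o := range samples {
		down += o.end.Sub(o.start)
	}
	return down, 100 * float64(down) / float64(window)
}

func main() {
	t := time.Date(2020, 9, 23, 12, 57, 16, 0, time.UTC)
	// Hypothetical intervals totalling 7s; not reconstructed from the events above.
	samples := []outage{
		{t, t.Add(2 * time.Second)},
		{t.Add(32 * time.Second), t.Add(34 * time.Second)},
		{t.Add(104 * time.Second), t.Add(105 * time.Second)},
		{t.Add(104 * time.Second), t.Add(106 * time.Second)},
	}
	down, pct := summarize(samples, 32*time.Minute)
	fmt.Printf("Service was unreachable for %s of %s (%.0f%%)\n", down, 32*time.Minute, pct)
}

Run as-is it prints "Service was unreachable for 7s of 32m0s (0%)", matching the rounding seen in the failure text.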



Cluster upgrade Cluster frontend ingress remain available 35m11s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 4m16s of 35m11s (12%):

Sep 23 12:55:30.340 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:55:31.191 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 12:55:31.546 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 12:56:22.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 12:56:22.484 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 12:56:31.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:56:31.521 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 12:57:17.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 12:57:18.191 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 23 12:57:18.573 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 12:57:22.481 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 12:57:23.191 - 2s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 12:57:25.649 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:57:25.764 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 12:57:26.066 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 12:57:26.191 - 1s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 23 12:57:26.412 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 12:57:27.518 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 12:57:27.522 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 12:57:28.191 - 2s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 12:57:31.467 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 12:57:31.487 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:57:32.191 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 12:57:32.525 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 12:57:48.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 12:57:49.191 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 12:57:50.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:57:51.191 - 4s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 12:57:55.507 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 12:57:58.525 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 12:58:17.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 12:58:17.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 12:58:17.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Sep 23 12:58:17.461 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 12:58:17.486 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 12:58:17.487 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Sep 23 12:59:00.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 12:59:01.191 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 12:59:01.354 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 13:07:31.051 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 13:07:31.191 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 13:07:40.459 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 13:07:41.203 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:07:42.191 - 16s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 13:07:46.063 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:07:46.190 - 19s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 23 13:07:55.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 13:07:55.422 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 13:07:59.498 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 13:08:05.424 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:08:09.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Sep 23 13:08:09.413 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Sep 23 13:08:09.498 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:08:10.191 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 13:08:15.424 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:08:16.191 - 7s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 23 13:08:19.830 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 13:08:23.476 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:10:44.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:10:44.489 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 13:10:50.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:10:50.520 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:10:56.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:10:57.191 - 26s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 13:11:01.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:11:01.455 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:11:04.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 13:11:04.480 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 13:11:12.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:11:12.462 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:11:23.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:11:23.521 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:11:23.536 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 13:14:31.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:14:31.475 I ns/openshift-console route/console Route started responding to GET requests over new connections
Sep 23 13:16:51.191 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Sep 23 13:16:51.191 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Sep 23 13:16:51.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:16:51.483 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:16:52.191 - 41s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Sep 23 13:16:52.191 - 41s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Sep 23 13:17:02.191 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Sep 23 13:17:03.191 - 29s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Sep 23 13:17:32.468 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Sep 23 13:17:33.665 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Sep 23 13:17:33.668 I ns/openshift-console route/console Route started responding to GET requests over new connections
				from junit_upgrade_1600867261.xml
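
Each excerpt line follows one layout: a timestamp, a severity column (E or I), a locator made of slash-separated tokens such as ns/openshift-console route/console, and a free-form message; lines carrying a "- 4s" continuation marker repeat the previous condition for that duration. The following is a rough parser for the single-timestamp form, written against that assumed layout rather than the actual openshift/origin serialization; locators without slashes, like the openshift-apiserver lines further down, would need extra handling.

package main

import (
	"fmt"
	"strings"
	"time"
)

// event is an assumed shape for one monitor line; the real monitor types
// also carry the "- 2s" continuation form and structured locators.
type event struct {
	at      time.Time
	level   string // E, W, or I
	locator string // e.g. "ns/openshift-console route/console"
	message string
}

// parseLine handles the single-timestamp form shown in the excerpts above.
func parseLine(line string) (event, error) {
	f := strings.Fields(line)
	if len(f) < 5 {
		return event{}, fmt.Errorf("short line: %q", line)
	}
	at, err := time.Parse("Jan 2 15:04:05.000", strings.Join(f[0:3], " "))
	if err != nil {
		return event{}, err
	}
	// Leading tokens that look like key/value pairs (ns/..., route/..., svc/...)
	// are treated as the locator; everything after is the free-form message.
	i := 4
	for i < len(f) && strings.Contains(f[i], "/") {
		i++
	}
	return event{
		at:      at,
		level:   f[3],
		locator: strings.Join(f[4:i], " "),
		message: strings.Join(f[i:], " "),
	}, nil
}

func main() {
	line := "Sep 23 12:55:30.340 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections"
	ev, err := parseLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] %s: %s\n", ev.at.Format("15:04:05.000"), ev.level, ev.locator, ev.message)
}

Parsing the lines this way is one way to group outages per route before totalling them as in the previous sketch, e.g. to recheck the 4m16s / 12% figure quoted for this test.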



Cluster upgrade Kubernetes and OpenShift APIs remain available 35m11s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1m28s of 35m11s (4%):

Sep 23 12:56:24.071 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Sep 23 12:56:24.071 E kube-apiserver Kube API started failing: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Sep 23 12:56:25.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 12:56:25.054 - 1s    E kube-apiserver Kube API is not responding to GET requests
Sep 23 12:56:25.347 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 12:56:26.122 I kube-apiserver Kube API started responding to GET requests
Sep 23 12:57:57.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 23 12:57:57.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:08:19.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 23 13:08:20.054 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:08:34.127 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:08:50.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Sep 23 13:08:50.126 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:11:54.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 23 13:11:55.054 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:12:09.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:12:26.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 23 13:12:27.054 - 7s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:12:34.184 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:12:37.183 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:12:38.054 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:12:40.324 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:12:43.327 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:12:44.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:12:44.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:12:52.544 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:12:53.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:12:53.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:07.903 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:08.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:08.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:10.975 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:11.054 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:17.214 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:20.191 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:20.260 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:29.409 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:30.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:30.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:35.551 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:35.620 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:44.767 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:45.054 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:47.909 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:13:50.911 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:13:51.054 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:57.125 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:15:21.054 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Sep 23 13:15:22.054 - 16s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:15:39.379 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:15:42.381 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:15:43.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:15:43.123 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:15:45.453 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:15:46.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:15:46.122 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:15:48.525 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:15:49.054 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:15:51.670 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:15:57.741 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:15:57.811 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:00.814 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:00.883 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:06.958 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:07.030 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:10.029 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:10.054 - 3s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:16:13.171 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:16.174 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:16.244 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:19.245 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:19.314 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:22.318 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:22.389 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:25.389 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:25.459 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:28.462 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:29.054 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:16:34.675 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:37.678 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:38.054 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:16:38.124 I openshift-apiserver OpenShift API started responding to GET requests
Sep 23 13:16:46.893 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Sep 23 13:16:46.965 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1600867261.xml
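
The API disruption samples above come from a poller that repeatedly GETs a deliberately nonexistent imagestream (.../imagestreams/missing?timeout=15s): an answer from the apiserver, including the expected NotFound, counts as available, while transport errors ("unexpected EOF", "Client.Timeout exceeded while awaiting headers") and 5xx responses ("the server is currently unable to handle the request") count as disruption. Below is a self-contained sketch of one such probe using plain net/http; the URL and bearer token are placeholders, not values from this job, and the real monitor is built on client-go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollOnce issues one availability probe in the spirit of the monitor above:
// it GETs a deliberately missing imagestream and treats any non-5xx answer
// as "API responding", while transport errors and 5xx count as outages.
func pollOnce(client *http.Client, url, token string) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := client.Do(req)
	if err != nil {
		// e.g. "unexpected EOF" or "Client.Timeout exceeded while awaiting headers"
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 500 {
		return fmt.Errorf("server error: %s", resp.Status)
	}
	return nil // a 404 for the missing imagestream still proves the API answered
}

func main() {
	client := &http.Client{
		Timeout: 15 * time.Second,
		Transport: &http.Transport{
			// The e2e monitor trusts the cluster CA instead; skipping
			// verification here only keeps the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://api.example.test:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"
	if err := pollOnce(client, url, "REDACTED"); err != nil {
		fmt.Println("E openshift-apiserver OpenShift API stopped responding to GET requests:", err)
	} else {
		fmt.Println("I openshift-apiserver OpenShift API started responding to GET requests")
	}
}

Against an unreachable host this prints a "stopped responding" style line with the transport error appended, which is the shape of the entries above.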



openshift-tests Monitor cluster while tests execute 36m16s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
194 error level events were detected during this test run:

Sep 23 12:48:10.120 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Sep 23 12:48:20.218 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6c47dc85f-qvd2w node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): :"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0923 12:44:50.248226       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"47a492b4-408a-4697-a326-b3a0f43f6cb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0923 12:44:58.454233       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"47a492b4-408a-4697-a326-b3a0f43f6cb3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-131-130.us-west-2.compute.internal -n openshift-kube-apiserver because it was missing\nW0923 12:47:46.643318       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19286 (19904)\nW0923 12:48:10.965960       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19904 (20027)\nW0923 12:48:17.267169       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20027 (20100)\nI0923 12:48:19.150789       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 12:48:19.150913       1 leaderelection.go:66] leaderelection lost\n
Sep 23 12:49:50.439 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7c5779ccd8-lhpnb node/ip-10-0-132-196.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): 3: connect: connection refused\\nI0923 12:44:23.982685       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\\nF0923 12:44:23.982764       1 controllermanager.go:291] leaderelection lost\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-130.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready"\nI0923 12:44:36.740085       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"b1a713e1-2692-4819-893e-c42a4775e223", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-130.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal container=\"kube-controller-manager-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nW0923 12:47:46.645119       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19286 (19913)\nW0923 12:48:11.051997       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19913 (20050)\nW0923 12:48:17.306624       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20050 (20107)\nI0923 12:49:49.869757       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 12:49:49.869813       1 leaderelection.go:66] leaderelection lost\n
Sep 23 12:50:40.579 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 50:39.798148       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:50:39.798155       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:50:39.798162       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:50:39.798168       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:50:39.798174       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:50:39.798180       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:50:39.798186       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:50:39.798193       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:50:39.798199       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:50:39.798206       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:50:39.798216       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:50:39.798224       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:50:39.798231       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:50:39.798239       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:50:39.798269       1 server.go:692] external host was not specified, using 10.0.132.196\nI0923 12:50:39.798397       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:50:39.798564       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:51:00.628 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 50:59.561842       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:50:59.561846       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:50:59.561850       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:50:59.561854       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:50:59.561857       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:50:59.561861       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:50:59.561864       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:50:59.561868       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:50:59.561871       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:50:59.561875       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:50:59.561881       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:50:59.561885       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:50:59.561889       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:50:59.561894       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:50:59.561916       1 server.go:692] external host was not specified, using 10.0.132.196\nI0923 12:50:59.562022       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:50:59.562183       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:51:11.655 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-5f755cc54b-2g49p node/ip-10-0-132-196.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): resource version: 14101 (18192)\nW0923 12:44:13.849298       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 14099 (18194)\nW0923 12:44:13.909679       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 13919 (18194)\nW0923 12:44:13.967302       1 reflector.go:299] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 16246 (18194)\nW0923 12:44:14.004616       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 13919 (18194)\nW0923 12:44:14.056057       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 16246 (18194)\nW0923 12:44:19.044654       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18215 (18221)\nW0923 12:47:46.646955       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19286 (19913)\nW0923 12:48:11.051963       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19913 (20050)\nW0923 12:48:17.305264       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20050 (20107)\nI0923 12:51:10.572794       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 12:51:10.573102       1 builder.go:217] server exited\n
Sep 23 12:51:22.477 E ns/openshift-machine-api pod/machine-api-operator-69549fb758-f8zgs node/ip-10-0-131-130.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 23 12:51:31.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 51:31.557937       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:51:31.557941       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:51:31.557945       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:51:31.557949       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:51:31.557952       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:51:31.557956       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:51:31.557960       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:51:31.557963       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:51:31.557967       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:51:31.557974       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:51:31.557982       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:51:31.557987       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:51:31.557992       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:51:31.557997       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:51:31.558019       1 server.go:692] external host was not specified, using 10.0.132.196\nI0923 12:51:31.558119       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:51:31.558285       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:51:54.765 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=19213&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.006118       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=21518&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.007199       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=21403&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.008294       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=17637&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.009423       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=17638&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.010737       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=21477&timeoutSeconds=403&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0923 12:51:54.045670       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0923 12:51:54.045686       1 server.go:264] leaderelection lost\n
Sep 23 12:51:54.829 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): ection refused\nE0923 12:51:54.098219       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/openshiftcontrollermanagers?allowWatchBookmarks=true&resourceVersion=18195&timeout=5m52s&timeoutSeconds=352&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.099218       1 reflector.go:280] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: Get https://localhost:6443/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=18278&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.100636       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=17637&timeout=6m18s&timeoutSeconds=378&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.101707       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubeapiservers?allowWatchBookmarks=true&resourceVersion=21499&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:51:54.102732       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/openshiftapiservers?allowWatchBookmarks=true&resourceVersion=18194&timeout=7m54s&timeoutSeconds=474&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0923 12:51:54.159960       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0923 12:51:54.160042       1 controllermanager.go:291] leaderelection lost\n
Sep 23 12:52:04.264 E ns/openshift-cluster-node-tuning-operator pod/tuned-kdt7v node/ip-10-0-142-66.us-west-2.compute.internal container=tuned container exited with code 143 (Error): uned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-23 12:44:37,492 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-09-23 12:44:37,493 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-23 12:44:37,600 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-23 12:44:37,602 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0923 12:44:50.674096   21346 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-daemonset-upgrade-5782/ds1-skrbh) labels changed node wide: true\nI0923 12:44:52.189057   21346 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:44:52.190670   21346 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:44:52.306811   21346 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:44:55.521545   21346 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-240/service-test-bfxbm) labels changed node wide: true\nI0923 12:44:57.189055   21346 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:44:57.190571   21346 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:44:57.305724   21346 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:45:02.588448   21346 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-4983/pod-secrets-dad63a4d-a457-474c-9223-0f99e557c598) labels changed node wide: false\nI0923 12:45:04.238441   21346 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8760/pod-configmap-379699a5-8fae-4a4c-9c88-3f3922a16c6e) labels changed node wide: false\nE0923 12:51:43.914596   21346 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:51:43.914626   21346 openshift-tuned.go:883] Increasing resyncPeriod to 116\n
Sep 23 12:52:04.477 E ns/openshift-cluster-node-tuning-operator pod/tuned-w688d node/ip-10-0-142-97.us-west-2.compute.internal container=tuned container exited with code 143 (Error): etting recommended profile...\nI0923 12:44:52.373576   22385 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:44:55.535499   22385 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-240/service-test-njsw2) labels changed node wide: true\nI0923 12:44:57.256204   22385 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:44:57.259550   22385 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:44:57.422962   22385 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:44:59.376523   22385 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-9560/dp-657fc4b57d-lzbxm) labels changed node wide: true\nI0923 12:45:02.255987   22385 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:45:02.257785   22385 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:45:02.381273   22385 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:45:02.646488   22385 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-9560/dp-857d95bf59-4clnp) labels changed node wide: true\nI0923 12:45:07.255986   22385 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:45:07.257535   22385 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:45:07.373685   22385 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:45:07.374177   22385 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-4983/pod-secrets-dad63a4d-a457-474c-9223-0f99e557c598) labels changed node wide: false\nE0923 12:51:43.984160   22385 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:51:43.984187   22385 openshift-tuned.go:883] Increasing resyncPeriod to 102\n
Sep 23 12:52:04.633 E ns/openshift-cluster-node-tuning-operator pod/tuned-gkb9q node/ip-10-0-131-130.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 923 12:51:17.365138   46529 openshift-tuned.go:550] Pod (openshift-apiserver-operator/openshift-apiserver-operator-55bb9b9785-pr4kk) labels changed node wide: true\nI0923 12:51:17.681356   46529 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:51:17.682915   46529 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:51:17.800689   46529 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:51:19.438021   46529 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-7-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: false\nI0923 12:51:26.622509   46529 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: true\nI0923 12:51:27.681337   46529 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:51:27.683165   46529 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:51:27.826631   46529 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:51:31.988071   46529 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-operator-69549fb758-f8zgs) labels changed node wide: true\nI0923 12:51:32.681347   46529 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:51:32.683314   46529 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:51:32.780927   46529 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:51:43.913748   46529 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0923 12:51:43.921923   46529 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:51:43.921940   46529 openshift-tuned.go:883] Increasing resyncPeriod to 136\n
Sep 23 12:52:52.576 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 52:51.810423       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:52:51.810430       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:52:51.810436       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:52:51.810443       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:52:51.810449       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:52:51.810454       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:52:51.810462       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:52:51.810468       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:52:51.810474       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:52:51.810481       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:52:51.810490       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:52:51.810497       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:52:51.810505       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:52:51.810513       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:52:51.810542       1 server.go:692] external host was not specified, using 10.0.147.217\nI0923 12:52:51.810652       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:52:51.810823       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:53:11.698 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 53:10.896216       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:53:10.896220       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:53:10.896224       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:53:10.896228       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:53:10.896231       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:53:10.896235       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:53:10.896238       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:53:10.896242       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:53:10.896245       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:53:10.896249       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:53:10.896255       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:53:10.896259       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:53:10.896263       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:53:10.896268       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:53:10.896287       1 server.go:692] external host was not specified, using 10.0.147.217\nI0923 12:53:10.896383       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:53:10.896547       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:53:38.800 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 53:37.883953       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:53:37.883958       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:53:37.883961       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:53:37.883965       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:53:37.883969       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:53:37.883973       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:53:37.883976       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:53:37.883979       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:53:37.883983       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:53:37.883987       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:53:37.883992       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:53:37.883997       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:53:37.884002       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:53:37.884007       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:53:37.884035       1 server.go:692] external host was not specified, using 10.0.147.217\nI0923 12:53:37.884159       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:53:37.884364       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:53:55.169 E ns/openshift-insights pod/insights-operator-d6c5489ff-47bm2 node/ip-10-0-131-130.us-west-2.compute.internal container=operator container exited with code 2 (Error): cal\nI0923 12:50:33.689823       1 httplog.go:90] GET /metrics: (5.610918ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:50:53.538395       1 httplog.go:90] GET /metrics: (5.969914ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:51:03.690353       1 httplog.go:90] GET /metrics: (6.118664ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:51:23.539592       1 httplog.go:90] GET /metrics: (7.261752ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:51:25.683072       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0923 12:51:25.687301       1 configobserver.go:93] Found cloud.openshift.com token\nI0923 12:51:25.687327       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0923 12:51:33.690181       1 httplog.go:90] GET /metrics: (5.944019ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:51:53.539830       1 httplog.go:90] GET /metrics: (7.333296ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:52:03.690679       1 httplog.go:90] GET /metrics: (6.426647ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:52:23.538212       1 httplog.go:90] GET /metrics: (5.864628ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:52:25.669204       1 status.go:314] The operator is healthy\nI0923 12:52:25.669258       1 status.go:423] No status update necessary, objects are identical\nI0923 12:52:33.690784       1 httplog.go:90] GET /metrics: (6.583157ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:52:53.540049       1 httplog.go:90] GET /metrics: (7.592324ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:53:03.689942       1 httplog.go:90] GET /metrics: (5.786222ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:53:23.539257       1 httplog.go:90] GET /metrics: (6.948361ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\nI0923 12:53:33.689707       1 httplog.go:90] GET /metrics: (5.542673ms) 200 [Prometheus/2.14.0 10.129.2.15:33554]\nI0923 12:53:53.542182       1 httplog.go:90] GET /metrics: (9.524998ms) 200 [Prometheus/2.14.0 10.128.2.11:42214]\n
Sep 23 12:53:56.174 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-5dd8785f97-pz265 node/ip-10-0-131-130.us-west-2.compute.internal container=operator container exited with code 255 (Error): threcorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0923 12:53:19.697715       1 httplog.go:90] GET /metrics: (1.162899ms) 200 [Prometheus/2.14.0 10.128.2.11:38850]\nI0923 12:53:24.456797       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0923 12:53:26.025275       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0923 12:53:36.035061       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0923 12:53:42.037363       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0923 12:53:42.037384       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0923 12:53:42.039171       1 httplog.go:90] GET /metrics: (6.130115ms) 200 [Prometheus/2.14.0 10.129.2.15:43228]\nI0923 12:53:43.164014       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 1 items received\nI0923 12:53:45.147968       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 1 items received\nI0923 12:53:46.044410       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0923 12:53:49.696931       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0923 12:53:49.696952       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0923 12:53:49.698110       1 httplog.go:90] GET /metrics: (1.294025ms) 200 [Prometheus/2.14.0 10.128.2.11:38850]\nI0923 12:53:55.239288       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 12:53:55.239409       1 leaderelection.go:66] leaderelection lost\n
Sep 23 12:53:58.744 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (159 of 508)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (142 of 508)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (256 of 508)\n* Could not update deployment "openshift-console/downloads" (326 of 508)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (197 of 508)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (180 of 508)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (385 of 508)\n* Could not update deployment "openshift-operator-lifecycle-manager/catalog-operator" (365 of 508)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (293 of 508)
Sep 23 12:53:59.869 E ns/openshift-monitoring pod/node-exporter-zqn5s node/ip-10-0-157-63.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:38:19Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:38:19Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 12:54:07.251 E ns/openshift-cluster-node-tuning-operator pod/tuned-twmpk node/ip-10-0-131-130.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ofile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:53:32.394934   60652 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: false\nI0923 12:53:40.437383   60652 openshift-tuned.go:550] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: true\nI0923 12:53:44.917241   60652 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:53:44.919501   60652 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:53:45.095312   60652 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:53:47.842171   60652 openshift-tuned.go:550] Pod (openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-764b59ff47-f7wx4) labels changed node wide: true\nI0923 12:53:49.918019   60652 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:53:49.919655   60652 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:53:50.080404   60652 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:53:50.698168   60652 openshift-tuned.go:550] Pod (openshift-authentication-operator/authentication-operator-bd89cfd7c-bqlbd) labels changed node wide: true\nI0923 12:53:54.934193   60652 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:53:54.945049   60652 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:53:55.223360   60652 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0923 12:53:59.451932   60652 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:53:59.452024   60652 openshift-tuned.go:883] Increasing resyncPeriod to 104\n
Sep 23 12:54:07.440 E ns/openshift-cluster-node-tuning-operator pod/tuned-ntklb node/ip-10-0-157-63.us-west-2.compute.internal container=tuned container exited with code 143 (Error): enshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0923 12:52:09.816470   40774 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:52:09.942058   40774 openshift-tuned.go:635] Active profile () != recommended profile (openshift-node)\nI0923 12:52:09.942106   40774 openshift-tuned.go:263] Starting tuned...\n2020-09-23 12:52:10,055 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-23 12:52:10,060 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-23 12:52:10,061 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-23 12:52:10,062 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-23 12:52:10,063 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-23 12:52:10,100 INFO     tuned.daemon.controller: starting controller\n2020-09-23 12:52:10,100 INFO     tuned.daemon.daemon: starting tuning\n2020-09-23 12:52:10,107 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-23 12:52:10,107 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-23 12:52:10,110 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-23 12:52:10,112 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-09-23 12:52:10,113 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-23 12:52:10,226 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-23 12:52:10,227 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0923 12:52:17.194754   40774 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-tbszl) labels changed node wide: false\nE0923 12:53:59.435045   40774 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:53:59.435071   40774 openshift-tuned.go:883] Increasing resyncPeriod to 110\n
Sep 23 12:54:10.876 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): or=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=23384&timeoutSeconds=434&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:09.467647       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=16445&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:09.468769       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=19213&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:09.470763       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=16453&timeout=9m48s&timeoutSeconds=588&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:09.472027       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=16453&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:09.473271       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=19179&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0923 12:54:10.341771       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0923 12:54:10.341795       1 server.go:264] leaderelection lost\n
Sep 23 12:54:22.615 E ns/openshift-monitoring pod/prometheus-adapter-7d9f946477-phdb6 node/ip-10-0-157-63.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0923 12:40:33.681207       1 adapter.go:93] successfully using in-cluster auth\nI0923 12:40:34.219290       1 secure_serving.go:116] Serving securely on [::]:6443\n
Sep 23 12:55:00.411 E ns/openshift-console-operator pod/console-operator-766656f7df-zppw2 node/ip-10-0-131-130.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): 8 (21961)\nW0923 12:53:59.624010       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20788 (22585)\nW0923 12:53:59.624175       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21572 (22585)\nW0923 12:53:59.624245       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 20552 (21910)\nW0923 12:53:59.624316       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18194 (22585)\nW0923 12:53:59.624460       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 18188 (22749)\nW0923 12:53:59.627911       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 18996 (21912)\nW0923 12:54:00.205020       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17548 (23421)\nW0923 12:54:00.230375       1 reflector.go:299] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: watch of *v1.ConsoleCLIDownload ended with: too old resource version: 18189 (23423)\nW0923 12:54:00.273438       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 17549 (23427)\nW0923 12:54:26.269092       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 24365 (24485)\nI0923 12:54:59.667842       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 12:54:59.667919       1 leaderelection.go:66] leaderelection lost\n
Sep 23 12:55:08.138 E ns/openshift-ingress pod/router-default-8d6b447bc-vcpmt node/ip-10-0-142-97.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:14.491467       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:19.520967       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:24.485052       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:29.594968       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:34.566083       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:39.563996       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:47.897389       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:52.801518       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:57.795148       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:02.794329       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Sep 23 12:55:13.344 E ns/openshift-monitoring pod/node-exporter-7925k node/ip-10-0-132-196.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:33:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 12:55:17.510 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 55:16.804382       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:55:16.804389       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:55:16.804395       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:55:16.804407       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:55:16.804413       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:55:16.804419       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:55:16.804425       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:55:16.804431       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:55:16.804437       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:55:16.804453       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:55:16.804462       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:55:16.804470       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:55:16.804478       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:55:16.804490       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:55:16.804701       1 server.go:692] external host was not specified, using 10.0.131.130\nI0923 12:55:16.804932       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:55:16.805557       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:55:28.957 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-23T12:54:33.311Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-23T12:54:33.315Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-23T12:54:33.316Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-23T12:54:33.317Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-23T12:54:33.317Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-23T12:54:33.317Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-23T12:54:33.322Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-23T12:54:33.322Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-23
Sep 23 12:55:30.178 E ns/openshift-marketplace pod/redhat-operators-5857bc7c87-mchbp node/ip-10-0-142-97.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Sep 23 12:55:30.969 E ns/openshift-ingress pod/router-default-8d6b447bc-xhshz node/ip-10-0-142-66.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:39.491467       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:47.802206       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:52.797976       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:54:57.801582       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:02.809006       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:07.820623       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:12.824993       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:17.807430       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:22.823110       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0923 12:55:29.069686       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Sep 23 12:55:32.566 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 55:32.399567       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:55:32.399589       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:55:32.399611       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:55:32.399632       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:55:32.399654       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:55:32.399676       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:55:32.399698       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:55:32.399719       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:55:32.399741       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:55:32.399763       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:55:32.399789       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:55:32.399813       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:55:32.399836       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:55:32.399893       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:55:32.399949       1 server.go:692] external host was not specified, using 10.0.131.130\nI0923 12:55:32.400103       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:55:32.400284       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:55:34.115 E ns/openshift-monitoring pod/node-exporter-j8sw8 node/ip-10-0-147-217.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:33:35Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:33:35Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 12:55:35.044 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-66.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/23 12:40:49 Watching directory: "/etc/alertmanager/config"\n
Sep 23 12:55:35.044 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-66.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/23 12:40:49 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/23 12:40:49 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/23 12:40:49 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/23 12:40:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/23 12:40:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/23 12:40:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/23 12:40:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/23 12:40:49 http.go:106: HTTPS: listening on [::]:9095\n
Sep 23 12:55:37.200 E ns/openshift-monitoring pod/thanos-querier-85cb667f66-w2txr node/ip-10-0-142-97.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/09/23 12:41:22 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/23 12:41:22 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/23 12:41:22 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/23 12:41:22 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/23 12:41:22 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/23 12:41:22 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/09/23 12:41:22 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/23 12:41:22 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/23 12:41:22 http.go:106: HTTPS: listening on [::]:9091\n
Sep 23 12:55:38.200 E ns/openshift-marketplace pod/community-operators-65bb78445b-bs9kn node/ip-10-0-142-97.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Sep 23 12:55:58.264 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-97.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-23T12:55:47.123Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-23T12:55:47.126Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-23T12:55:47.127Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-23T12:55:47.128Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-23T12:55:47.128Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-23T12:55:47.128Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-23T12:55:47.129Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-23T12:55:47.129Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-23
Sep 23 12:55:59.099 E ns/openshift-marketplace pod/certified-operators-6fdcf954c6-zl58h node/ip-10-0-142-66.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Sep 23 12:56:03.685 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 56:03.177282       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0923 12:56:03.177289       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0923 12:56:03.177295       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0923 12:56:03.177301       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0923 12:56:03.177308       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0923 12:56:03.177314       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0923 12:56:03.177320       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0923 12:56:03.177326       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0923 12:56:03.177332       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0923 12:56:03.177338       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0923 12:56:03.177348       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0923 12:56:03.177357       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0923 12:56:03.177366       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0923 12:56:03.177374       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0923 12:56:03.177404       1 server.go:692] external host was not specified, using 10.0.131.130\nI0923 12:56:03.177515       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0923 12:56:03.177690       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Sep 23 12:56:08.668 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6cc488d794-k87wx node/ip-10-0-131-130.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 23 12:56:08.722 E ns/openshift-service-ca pod/configmap-cabundle-injector-75d5d8586d-7msnd node/ip-10-0-131-130.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Sep 23 12:56:13.221 E ns/openshift-console pod/console-857fb674d5-4fjbr node/ip-10-0-147-217.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/09/23 12:39:53 cmd/main: cookies are secure!\n2020/09/23 12:39:53 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:03 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:13 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:23 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:33 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:43 cmd/main: Binding to [::]:8443...\n2020/09/23 12:40:43 cmd/main: using TLS\n
Sep 23 12:56:20.523 E ns/openshift-console pod/console-857fb674d5-k6g47 node/ip-10-0-132-196.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/09/23 12:40:01 cmd/main: cookies are secure!\n2020/09/23 12:40:01 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:11 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:21 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:31 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/09/23 12:40:41 cmd/main: Binding to [::]:8443...\n2020/09/23 12:40:41 cmd/main: using TLS\n
Sep 23 12:56:34.751 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-controller-manager-7 container exited with code 255 (Error): timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.259747       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consolelinks?allowWatchBookmarks=true&resourceVersion=21674&timeout=6m1s&timeoutSeconds=361&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.260914       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:6443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=18384&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.261922       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/k8s.cni.cncf.io/v1/network-attachment-definitions?allowWatchBookmarks=true&resourceVersion=21676&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.263030       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=23567&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0923 12:56:34.600305       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0923 12:56:34.600359       1 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-131-130_f26f5b73-5f88-49c6-9a49-81fec2687253 stopped leading\nF0923 12:56:34.600383       1 controllermanager.go:291] leaderelection lost\n
Sep 23 12:56:35.836 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=26581&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.223062       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=26307&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.224306       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=18405&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.227879       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=24985&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.229298       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=26583&timeoutSeconds=461&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:34.231046       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=18405&timeout=9m51s&timeoutSeconds=591&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0923 12:56:34.962975       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded\nF0923 12:56:34.963004       1 server.go:264] leaderelection lost\n
Sep 23 12:56:47.792 E ns/openshift-cluster-node-tuning-operator pod/tuned-dnspz node/ip-10-0-131-130.us-west-2.compute.internal container=tuned container exited with code 143 (Error): o:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:55:11.966798   67467 openshift-tuned.go:550] Pod (openshift-image-registry/node-ca-jcxrv) labels changed node wide: true\nI0923 12:55:15.250435   67467 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:55:15.251927   67467 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:55:15.371109   67467 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:55:16.690171   67467 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-946b65b75-774cv) labels changed node wide: true\nI0923 12:55:20.250430   67467 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:55:20.251880   67467 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:55:20.351293   67467 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:56:02.053244   67467 openshift-tuned.go:550] Pod (openshift-console/console-5f894f5698-hst65) labels changed node wide: true\nI0923 12:56:05.250442   67467 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:56:05.252235   67467 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:56:05.399934   67467 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 12:56:21.958860   67467 openshift-tuned.go:550] Pod (openshift-service-ca/service-serving-cert-signer-5894687958-sbv8k) labels changed node wide: true\nI0923 12:56:24.039172   67467 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0923 12:56:24.043091   67467 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:56:24.043110   67467 openshift-tuned.go:883] Increasing resyncPeriod to 118\n
Sep 23 12:56:48.097 E ns/openshift-cluster-node-tuning-operator pod/tuned-nzgx5 node/ip-10-0-157-63.us-west-2.compute.internal container=tuned container exited with code 143 (Error): s to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:54:30.371892   47938 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:54:30.493039   47938 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:54:37.590134   47938 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-7c48966d94-f7dll) labels changed node wide: true\nI0923 12:54:40.370364   47938 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:54:40.425429   47938 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:54:40.557822   47938 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:54:57.203868   47938 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-85cb667f66-4tlg8) labels changed node wide: true\nI0923 12:55:00.370348   47938 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:55:00.412390   47938 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:55:00.526332   47938 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:55:27.187503   47938 openshift-tuned.go:550] Pod (openshift-monitoring/telemeter-client-7fff8cd4fc-j6kxh) labels changed node wide: true\nI0923 12:55:30.370374   47938 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 12:55:30.379729   47938 openshift-tuned.go:441] Getting recommended profile...\nI0923 12:55:30.618487   47938 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 12:56:24.040167   47938 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0923 12:56:24.044041   47938 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 12:56:24.044061   47938 openshift-tuned.go:883] Increasing resyncPeriod to 114\n
Sep 23 12:57:12.858 E ns/openshift-sdn pod/sdn-controller-kxkl4 node/ip-10-0-131-130.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0923 12:28:13.257495       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0923 12:33:48.657168       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.159.23:6443: i/o timeout\n
Sep 23 12:57:16.279 E ns/openshift-sdn pod/sdn-lm7m2 node/ip-10-0-142-66.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ndrobin.go:270] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.131.130:6443 10.0.132.196:6443 10.0.147.217:6443]\nI0923 12:56:54.195784    1936 roundrobin.go:218] Delete endpoint 10.0.131.130:6443 for service "default/kubernetes:https"\nI0923 12:56:54.315562    1936 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:56:54.315589    1936 proxier.go:350] userspace syncProxyRules took 28.44744ms\nI0923 12:57:01.052713    1936 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.131.130:6443 10.0.132.196:6443 10.0.147.217:6443]\nI0923 12:57:01.052756    1936 roundrobin.go:218] Delete endpoint 10.0.131.130:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0923 12:57:01.178349    1936 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:01.178385    1936 proxier.go:350] userspace syncProxyRules took 28.547713ms\nI0923 12:57:03.433289    1936 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.9:6443 10.130.0.13:6443]\nI0923 12:57:03.433324    1936 roundrobin.go:218] Delete endpoint 10.128.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0923 12:57:03.555941    1936 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:03.555967    1936 proxier.go:350] userspace syncProxyRules took 29.326706ms\nI0923 12:57:15.079938    1936 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-240/service-test: to [10.131.0.20:80]\nI0923 12:57:15.079968    1936 roundrobin.go:218] Delete endpoint 10.128.2.14:80 for service "e2e-k8s-service-lb-available-240/service-test:"\nI0923 12:57:15.201111    1936 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:15.201136    1936 proxier.go:350] userspace syncProxyRules took 28.192188ms\nF0923 12:57:16.011494    1936 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 23 12:57:26.439 E ns/openshift-sdn pod/sdn-controller-gzxwn node/ip-10-0-147-217.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0923 12:28:28.704763       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 23 12:57:34.465 E ns/openshift-multus pod/multus-admission-controller-22m9d node/ip-10-0-147-217.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 23 12:57:34.546 E ns/openshift-multus pod/multus-dljzm node/ip-10-0-142-97.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 23 12:57:39.480 E ns/openshift-sdn pod/ovs-27ssk node/ip-10-0-147-217.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): th24d7c938 on port 36\n2020-09-23T12:56:12.601Z|00359|connmgr|INFO|br0<->unix#1851: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:56:12.625Z|00360|connmgr|INFO|br0<->unix#1854: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T12:56:12.646Z|00361|bridge|INFO|bridge br0: deleted interface veth2c41fc32 on port 37\n2020-09-23T12:56:12.924Z|00362|bridge|INFO|bridge br0: added interface vethab610f7f on port 58\n2020-09-23T12:56:12.952Z|00363|connmgr|INFO|br0<->unix#1857: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:56:12.984Z|00364|connmgr|INFO|br0<->unix#1860: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:56:22.590Z|00365|bridge|INFO|bridge br0: added interface vethed7d9175 on port 59\n2020-09-23T12:56:22.624Z|00366|connmgr|INFO|br0<->unix#1873: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:56:22.664Z|00367|connmgr|INFO|br0<->unix#1877: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-23T12:56:22.666Z|00368|connmgr|INFO|br0<->unix#1879: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:56:22.680Z|00369|bridge|INFO|bridge br0: added interface veth9a252a8f on port 60\n2020-09-23T12:56:22.706Z|00370|connmgr|INFO|br0<->unix#1882: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:56:22.740Z|00371|connmgr|INFO|br0<->unix#1885: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:57:33.673Z|00372|connmgr|INFO|br0<->unix#1936: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:57:33.698Z|00373|connmgr|INFO|br0<->unix#1939: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T12:57:33.717Z|00374|bridge|INFO|bridge br0: deleted interface veth168b0a01 on port 3\n2020-09-23T12:57:36.007Z|00375|bridge|INFO|bridge br0: added interface vethc8be94bf on port 61\n2020-09-23T12:57:36.040Z|00376|connmgr|INFO|br0<->unix#1943: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:57:36.089Z|00377|connmgr|INFO|br0<->unix#1948: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23 12:57:38 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 12:57:49.503 E ns/openshift-sdn pod/sdn-dqz6q node/ip-10-0-147-217.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 18 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.56:8443 10.129.0.63:8443 10.130.0.72:8443]\nI0923 12:57:27.877743    2318 roundrobin.go:218] Delete endpoint 10.129.0.63:8443 for service "openshift-controller-manager/controller-manager:https"\nI0923 12:57:28.003198    2318 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:28.003215    2318 proxier.go:350] userspace syncProxyRules took 27.425968ms\nI0923 12:57:33.725553    2318 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-22m9d\nI0923 12:57:36.061733    2318 pod.go:504] CNI_ADD openshift-multus/multus-admission-controller-prh88 got IP 10.128.0.60, ofport 61\nI0923 12:57:41.447431    2318 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.60:6443 10.129.0.9:6443 10.130.0.13:6443]\nI0923 12:57:41.447458    2318 roundrobin.go:218] Delete endpoint 10.128.0.60:6443 for service "openshift-multus/multus-admission-controller:"\nI0923 12:57:41.461040    2318 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.60:6443 10.129.0.9:6443]\nI0923 12:57:41.461068    2318 roundrobin.go:218] Delete endpoint 10.130.0.13:6443 for service "openshift-multus/multus-admission-controller:"\nI0923 12:57:41.555294    2318 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:41.555312    2318 proxier.go:350] userspace syncProxyRules took 24.691425ms\nI0923 12:57:41.653402    2318 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:41.653419    2318 proxier.go:350] userspace syncProxyRules took 24.439911ms\nI0923 12:57:45.472090    2318 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0923 12:57:48.452080    2318 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Sep 23 12:58:14.033 E ns/openshift-sdn pod/sdn-nlp8g node/ip-10-0-131-130.us-west-2.compute.internal container=sdn container exited with code 255 (Error): -version-migrator-operator/metrics:https" at 172.30.201.142:443/TCP\nI0923 12:57:53.271741   79093 service.go:357] Adding new service port "openshift-kube-apiserver-operator/metrics:https" at 172.30.163.213:443/TCP\nI0923 12:57:53.271749   79093 service.go:357] Adding new service port "openshift-machine-api/cluster-autoscaler-operator:metrics" at 172.30.77.110:9192/TCP\nI0923 12:57:53.271758   79093 service.go:357] Adding new service port "openshift-machine-api/cluster-autoscaler-operator:https" at 172.30.77.110:443/TCP\nI0923 12:57:53.271991   79093 proxier.go:731] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0923 12:57:53.369716   79093 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:53.369736   79093 proxier.go:350] userspace syncProxyRules took 98.564847ms\nI0923 12:57:53.384108   79093 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:57:53.384189   79093 proxier.go:350] userspace syncProxyRules took 112.307565ms\nI0923 12:57:53.434459   79093 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:30227/tcp)\nI0923 12:57:53.434769   79093 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-240/service-test:" (:30671/tcp)\nI0923 12:57:53.434955   79093 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:32224/tcp)\nI0923 12:57:53.456596   79093 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 32599\nI0923 12:57:53.464554   79093 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0923 12:57:53.464595   79093 cmd.go:173] openshift-sdn network plugin registering startup\nI0923 12:57:53.464680   79093 cmd.go:177] openshift-sdn network plugin ready\nI0923 12:58:12.846974   79093 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0923 12:58:13.367250   79093 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 23 12:58:19.041 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-644b7c9857-ngcj9 node/ip-10-0-131-130.us-west-2.compute.internal container=manager container exited with code 1 (Error): 2:56:25Z" level=info msg="syncing credentials request" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-openstack\ntime="2020-09-23T12:56:25Z" level=debug msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-openstack secret=openshift-machine-api/openstack-cloud-credentials\ntime="2020-09-23T12:56:25Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-openstack secret=openshift-machine-api/openstack-cloud-credentials\ntime="2020-09-23T12:56:25Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-openstack secret=openshift-machine-api/openstack-cloud-credentials\ntime="2020-09-23T12:56:25Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-09-23T12:56:25Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-09-23T12:56:25Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-09-23T12:56:25Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-09-23T12:56:25Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-09-23T12:56:25Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-09-23T12:58:17Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-09-23T12:58:17Z" level=info msg="reconcile complete" controller=metrics elapsed="993.466µs"\ntime="2020-09-23T12:58:18Z" level=error msg="leader election lostunable to run the manager"\n
Sep 23 12:58:28.948 E ns/openshift-sdn pod/ovs-wr5s9 node/ip-10-0-132-196.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): :58:27.611Z|00470|bridge|INFO|bridge br0: deleted interface veth69d6fb57 on port 74\n2020-09-23T12:58:27.611Z|00471|bridge|INFO|bridge br0: deleted interface veth2fca9f61 on port 54\n2020-09-23T12:58:27.611Z|00472|bridge|INFO|bridge br0: deleted interface veth69f41e76 on port 64\n2020-09-23T12:58:27.611Z|00473|bridge|INFO|bridge br0: deleted interface tun0 on port 2\n2020-09-23T12:58:27.611Z|00474|bridge|INFO|bridge br0: deleted interface vetheadf2ff1 on port 72\n2020-09-23T12:58:27.611Z|00475|bridge|INFO|bridge br0: deleted interface veth2f1aea9c on port 70\n2020-09-23T12:58:27.611Z|00476|bridge|INFO|bridge br0: deleted interface veth31a5ccbf on port 65\n2020-09-23T12:58:27.611Z|00477|bridge|INFO|bridge br0: deleted interface vethe7a20107 on port 63\n2020-09-23T12:58:27.611Z|00478|bridge|INFO|bridge br0: deleted interface veth0e69e62e on port 58\n2020-09-23T12:58:27.611Z|00479|bridge|INFO|bridge br0: deleted interface veth4ed22355 on port 3\n2020-09-23T12:58:27.611Z|00480|bridge|INFO|bridge br0: deleted interface vetha989ce84 on port 12\n2020-09-23T12:58:27.611Z|00481|bridge|INFO|bridge br0: deleted interface veth3d7742a4 on port 59\n2020-09-23T12:58:27.611Z|00482|bridge|INFO|bridge br0: deleted interface vethd5c6f7e0 on port 11\n2020-09-23T12:58:27.611Z|00483|bridge|INFO|bridge br0: deleted interface br0 on port 65534\n2020-09-23T12:58:27.611Z|00484|bridge|INFO|bridge br0: deleted interface veth5c7ffa1f on port 69\n2020-09-23T12:58:27.611Z|00485|bridge|INFO|bridge br0: deleted interface vethd0575bdf on port 68\n2020-09-23T12:58:27.611Z|00486|bridge|INFO|bridge br0: deleted interface veth120b3c8e on port 66\n2020-09-23T12:58:27.611Z|00487|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1\n2020-09-23T12:58:27.611Z|00488|bridge|INFO|bridge br0: deleted interface veth6a722ed6 on port 73\n2020-09-23T12:58:27.611Z|00489|bridge|INFO|bridge br0: deleted interface veth5755c31c on port 67\n2020-09-23 12:58:27 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 12:58:32.008 E ns/openshift-sdn pod/sdn-24262 node/ip-10-0-132-196.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ift-multus/multus-admission-controller:"\nI0923 12:58:18.950147   79946 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:18.950162   79946 proxier.go:350] userspace syncProxyRules took 24.827992ms\nI0923 12:58:19.010485   79946 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics\nI0923 12:58:19.014021   79946 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/controller-manager-service:\nI0923 12:58:19.056989   79946 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:19.057005   79946 proxier.go:350] userspace syncProxyRules took 24.461736ms\nI0923 12:58:19.159133   79946 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:19.159152   79946 proxier.go:350] userspace syncProxyRules took 24.594894ms\nI0923 12:58:20.012514   79946 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.129.0.53:2112]\nI0923 12:58:20.012625   79946 roundrobin.go:218] Delete endpoint 10.129.0.53:2112 for service "openshift-cloud-credential-operator/cco-metrics:cco-metrics"\nI0923 12:58:20.012683   79946 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.129.0.53:443]\nI0923 12:58:20.012704   79946 roundrobin.go:218] Delete endpoint 10.129.0.53:443 for service "openshift-cloud-credential-operator/controller-manager-service:"\nI0923 12:58:20.121458   79946 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:20.121477   79946 proxier.go:350] userspace syncProxyRules took 28.234942ms\nI0923 12:58:20.223815   79946 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:20.223833   79946 proxier.go:350] userspace syncProxyRules took 24.645605ms\nF0923 12:58:30.922230   79946 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 23 12:58:50.711 E ns/openshift-sdn pod/ovs-hxwsk node/ip-10-0-142-97.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): :37.909Z|00204|connmgr|INFO|br0<->unix#1195: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:57:37.937Z|00205|connmgr|INFO|br0<->unix#1198: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:57:37.960Z|00206|connmgr|INFO|br0<->unix#1201: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:49.733Z|00207|bridge|INFO|bridge br0: deleted interface veth8ea3c483 on port 31\n2020-09-23T12:58:49.733Z|00208|bridge|INFO|bridge br0: deleted interface vethbc9e45c5 on port 20\n2020-09-23T12:58:49.733Z|00209|bridge|INFO|bridge br0: deleted interface veth483ed9a5 on port 28\n2020-09-23T12:58:49.733Z|00210|bridge|INFO|bridge br0: deleted interface veth9c45317c on port 16\n2020-09-23T12:58:49.733Z|00211|bridge|INFO|bridge br0: deleted interface vethed4ed411 on port 19\n2020-09-23T12:58:49.733Z|00212|bridge|INFO|bridge br0: deleted interface veth653f8ab1 on port 29\n2020-09-23T12:58:49.733Z|00213|bridge|INFO|bridge br0: deleted interface veth3620997d on port 25\n2020-09-23T12:58:49.733Z|00214|bridge|INFO|bridge br0: deleted interface vethd2c76960 on port 24\n2020-09-23T12:58:49.733Z|00215|bridge|INFO|bridge br0: deleted interface tun0 on port 2\n2020-09-23T12:58:49.733Z|00216|bridge|INFO|bridge br0: deleted interface vethc125ad90 on port 30\n2020-09-23T12:58:49.733Z|00217|bridge|INFO|bridge br0: deleted interface veth305a2a64 on port 26\n2020-09-23T12:58:49.733Z|00218|bridge|INFO|bridge br0: deleted interface br0 on port 65534\n2020-09-23T12:58:49.733Z|00219|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1\n2020-09-23T12:58:49.733Z|00220|bridge|INFO|bridge br0: deleted interface vetha23f2248 on port 27\n2020-09-23T12:58:49.733Z|00221|bridge|INFO|bridge br0: deleted interface veth4eed62ed on port 22\n2020-09-23T12:58:49.733Z|00222|bridge|INFO|bridge br0: deleted interface vethe0ffd15d on port 21\n2020-09-23T12:58:49.733Z|00223|bridge|INFO|bridge br0: deleted interface veth691279c2 on port 3\n2020-09-23 12:58:49 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 12:58:58.738 E ns/openshift-sdn pod/sdn-dgzp7 node/ip-10-0-142-97.us-west-2.compute.internal container=sdn container exited with code 255 (Error): cessing 0 service events\nI0923 12:58:19.230368   63455 proxier.go:350] userspace syncProxyRules took 28.375133ms\nI0923 12:58:20.010456   63455 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.129.0.53:2112]\nI0923 12:58:20.010501   63455 roundrobin.go:218] Delete endpoint 10.129.0.53:2112 for service "openshift-cloud-credential-operator/cco-metrics:cco-metrics"\nI0923 12:58:20.010558   63455 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.129.0.53:443]\nI0923 12:58:20.010574   63455 roundrobin.go:218] Delete endpoint 10.129.0.53:443 for service "openshift-cloud-credential-operator/controller-manager-service:"\nI0923 12:58:20.130455   63455 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:20.130486   63455 proxier.go:350] userspace syncProxyRules took 29.791717ms\nI0923 12:58:20.246569   63455 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:20.246593   63455 proxier.go:350] userspace syncProxyRules took 27.920422ms\nI0923 12:58:50.373193   63455 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:50.373220   63455 proxier.go:350] userspace syncProxyRules took 27.749891ms\nI0923 12:58:57.651497   63455 ovs.go:180] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nI0923 12:58:57.982111   63455 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-240/service-test: to [10.128.2.14:80]\nI0923 12:58:57.982150   63455 roundrobin.go:218] Delete endpoint 10.131.0.20:80 for service "e2e-k8s-service-lb-available-240/service-test:"\nI0923 12:58:58.099104   63455 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:58.099138   63455 proxier.go:350] userspace syncProxyRules took 27.113879ms\nF0923 12:58:58.174784   63455 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 23 12:59:14.387 E ns/openshift-sdn pod/ovs-5swp2 node/ip-10-0-157-63.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): #987: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T12:55:21.657Z|00161|bridge|INFO|bridge br0: deleted interface veth8f41ffb4 on port 3\n2020-09-23T12:55:28.376Z|00162|bridge|INFO|bridge br0: added interface vethb6c1d682 on port 26\n2020-09-23T12:55:28.424Z|00163|connmgr|INFO|br0<->unix#995: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:55:28.490Z|00164|connmgr|INFO|br0<->unix#998: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:55:28.680Z|00165|bridge|INFO|bridge br0: added interface vetha61e5a6d on port 27\n2020-09-23T12:55:28.719Z|00166|connmgr|INFO|br0<->unix#1001: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T12:55:28.766Z|00167|connmgr|INFO|br0<->unix#1004: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T12:58:20.959Z|00168|connmgr|INFO|br0<->unix#1129: 2 flow_mods in the last 0 s (2 adds)\n2020-09-23T12:58:21.008Z|00169|connmgr|INFO|br0<->unix#1133: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:21.183Z|00170|connmgr|INFO|br0<->unix#1141: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:58:21.217Z|00171|connmgr|INFO|br0<->unix#1144: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:21.240Z|00172|connmgr|INFO|br0<->unix#1147: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:58:21.266Z|00173|connmgr|INFO|br0<->unix#1150: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:21.291Z|00174|connmgr|INFO|br0<->unix#1153: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:58:21.319Z|00175|connmgr|INFO|br0<->unix#1156: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:21.350Z|00176|connmgr|INFO|br0<->unix#1159: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:58:21.374Z|00177|connmgr|INFO|br0<->unix#1162: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23T12:58:21.397Z|00178|connmgr|INFO|br0<->unix#1165: 3 flow_mods in the last 0 s (3 adds)\n2020-09-23T12:58:21.428Z|00179|connmgr|INFO|br0<->unix#1168: 1 flow_mods in the last 0 s (1 adds)\n2020-09-23 12:59:13 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 12:59:16.736 E ns/openshift-multus pod/multus-7gmfz node/ip-10-0-147-217.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 23 12:59:17.414 E ns/openshift-sdn pod/sdn-m47mt node/ip-10-0-157-63.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 734 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0923 12:58:21.517124   63734 cmd.go:173] openshift-sdn network plugin registering startup\nI0923 12:58:21.517225   63734 cmd.go:177] openshift-sdn network plugin ready\nI0923 12:58:51.268293   63734 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:51.268318   63734 proxier.go:350] userspace syncProxyRules took 27.236167ms\nI0923 12:58:57.982725   63734 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-240/service-test: to [10.128.2.14:80]\nI0923 12:58:57.982775   63734 roundrobin.go:218] Delete endpoint 10.131.0.20:80 for service "e2e-k8s-service-lb-available-240/service-test:"\nI0923 12:58:58.100468   63734 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:58:58.100493   63734 proxier.go:350] userspace syncProxyRules took 28.66802ms\nI0923 12:58:59.980022   63734 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-240/service-test: to [10.128.2.14:80 10.131.0.20:80]\nI0923 12:58:59.980063   63734 roundrobin.go:218] Delete endpoint 10.131.0.20:80 for service "e2e-k8s-service-lb-available-240/service-test:"\nI0923 12:59:00.100263   63734 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:59:00.100292   63734 proxier.go:350] userspace syncProxyRules took 27.810855ms\nI0923 12:59:08.119590   63734 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.60:6443 10.129.0.64:6443 10.130.0.73:6443]\nI0923 12:59:08.119650   63734 roundrobin.go:218] Delete endpoint 10.129.0.64:6443 for service "openshift-multus/multus-admission-controller:"\nI0923 12:59:08.249194   63734 proxier.go:371] userspace proxy: processing 0 service events\nI0923 12:59:08.249217   63734 proxier.go:350] userspace syncProxyRules took 27.540067ms\nF0923 12:59:16.365924   63734 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Sep 23 13:00:08.602 E ns/openshift-multus pod/multus-bm7pj node/ip-10-0-157-63.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 23 13:00:58.451 E ns/openshift-multus pod/multus-687vg node/ip-10-0-131-130.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 23 13:01:51.377 E ns/openshift-multus pod/multus-mm5dw node/ip-10-0-142-66.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Sep 23 13:04:27.402 E ns/openshift-machine-config-operator pod/machine-config-daemon-mcf7v node/ip-10-0-142-97.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:04:42.540 E ns/openshift-machine-config-operator pod/machine-config-daemon-hs8sj node/ip-10-0-147-217.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:05:02.731 E ns/openshift-machine-config-operator pod/machine-config-daemon-d9882 node/ip-10-0-142-66.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:05:26.051 E ns/openshift-machine-config-operator pod/machine-config-daemon-47hsq node/ip-10-0-131-130.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:05:43.687 E ns/openshift-machine-config-operator pod/machine-config-controller-84d4d44f7f-jfm4q node/ip-10-0-147-217.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): tch of *v1.MachineConfigPool ended with: too old resource version: 18194 (23447)\nW0923 12:54:01.101798       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 17549 (23461)\nW0923 12:54:01.272930       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 17549 (23475)\nW0923 12:54:01.355101       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.KubeletConfig ended with: too old resource version: 18190 (23475)\nW0923 12:54:01.430223       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 18194 (23477)\nI0923 12:54:01.631424       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0923 12:54:01.679857       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nW0923 12:54:23.369175       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 23346 (24336)\nW0923 12:54:26.348509       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 24362 (24485)\nW0923 13:00:56.282482       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28750 (28955)\nW0923 13:00:59.192175       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28955 (28974)\n
Sep 23 13:07:19.947 E ns/openshift-machine-config-operator pod/machine-config-server-mbdp4 node/ip-10-0-147-217.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 12:29:31.221449       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 12:29:31.222334       1 api.go:56] Launching server on :22624\nI0923 12:29:31.222382       1 api.go:56] Launching server on :22623\nI0923 12:35:11.302821       1 api.go:102] Pool worker requested by 10.0.131.206:36326\n
Sep 23 13:07:22.456 E ns/openshift-machine-config-operator pod/machine-config-server-lcxrc node/ip-10-0-132-196.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 12:29:27.619583       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 12:29:27.620459       1 api.go:56] Launching server on :22624\nI0923 12:29:27.620488       1 api.go:56] Launching server on :22623\nI0923 12:35:06.093122       1 api.go:102] Pool worker requested by 10.0.131.206:6303\n
Sep 23 13:07:25.335 E ns/openshift-machine-config-operator pod/machine-config-server-b8ph6 node/ip-10-0-131-130.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 12:29:28.379346       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 12:29:28.380160       1 api.go:56] Launching server on :22624\nI0923 12:29:28.380253       1 api.go:56] Launching server on :22623\nI0923 12:35:09.045327       1 api.go:102] Pool worker requested by 10.0.159.23:15708\n
Sep 23 13:07:30.900 E ns/openshift-monitoring pod/kube-state-metrics-5fc947c479-2l7xf node/ip-10-0-142-97.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Sep 23 13:07:32.468 E ns/openshift-authentication-operator pod/authentication-operator-bd89cfd7c-bqlbd node/ip-10-0-131-130.us-west-2.compute.internal container=operator container exited with code 255 (Error):       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 21676 (23427)\nW0923 12:56:24.586963       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 25559 (26194)\nW0923 12:56:24.587013       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 25559 (26194)\nW0923 12:56:24.587063       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 25559 (26194)\nW0923 12:56:24.587111       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 25559 (26194)\nW0923 12:56:24.587150       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 21522 (23421)\nW0923 12:56:24.587224       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 21673 (24487)\nW0923 12:56:24.587325       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 21675 (24461)\nW0923 12:56:24.587341       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 21522 (21961)\nW0923 13:00:56.246673       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 28838 (28954)\nI0923 13:07:31.596610       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 13:07:31.596660       1 leaderelection.go:66] leaderelection lost\n
Sep 23 13:07:33.536 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-55bb9b9785-pr4kk node/ip-10-0-131-130.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): o:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"df81991b-aa5b-40aa-b80d-897b7c9db833", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'OpenShiftAPICheckFailed' "user.openshift.io.v1" failed with HTTP status code 503 (the server is currently unable to handle the request)\nI0923 12:58:03.861989       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"df81991b-aa5b-40aa-b80d-897b7c9db833", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("Available: \"build.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"image.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"project.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"user.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)")\nI0923 12:58:23.063339       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"df81991b-aa5b-40aa-b80d-897b7c9db833", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nW0923 13:00:56.247172       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 28838 (28954)\nI0923 13:07:32.789139       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 13:07:32.789183       1 leaderelection.go:66] leaderelection lost\n
Sep 23 13:07:51.126 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-66.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-23T13:07:44.171Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-23T13:07:44.179Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-23T13:07:44.180Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-23T13:07:44.181Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-23T13:07:44.181Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-23T13:07:44.181Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-23T13:07:44.182Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-23T13:07:44.182Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-23
Sep 23 13:08:13.277 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana CA bundle ConfigMap failed: updating ConfigMap object failed: Put https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/grafana-trusted-ca-bundle-d34s91lhv300e: unexpected EOF
Sep 23 13:08:17.034 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:09:05.960 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Sep 23 13:10:12.579 E ns/openshift-monitoring pod/node-exporter-7948q node/ip-10-0-142-97.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:55:01Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:01Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:10:12.593 E ns/openshift-cluster-node-tuning-operator pod/tuned-6z4z4 node/ip-10-0-142-97.us-west-2.compute.internal container=tuned container exited with code 143 (Error): d (openshift-machine-config-operator/machine-config-daemon-mcf7v) labels changed node wide: true\nI0923 13:04:40.390256   61247 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:04:40.395036   61247 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:04:40.580668   61247 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:07:29.025438   61247 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-9560/dp-657fc4b57d-lzbxm) labels changed node wide: true\nI0923 13:07:30.384295   61247 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:07:30.387600   61247 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:07:30.595029   61247 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:07:36.719743   61247 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0923 13:07:40.382459   61247 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:07:40.385217   61247 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:07:40.498900   61247 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:08:00.867392   61247 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5031/foo-7cfr7) labels changed node wide: true\nI0923 13:08:05.382440   61247 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:08:05.383994   61247 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:08:05.497475   61247 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:08:26.715496   61247 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-240/service-test-njsw2) labels changed node wide: true\n
Sep 23 13:10:12.638 E ns/openshift-multus pod/multus-dfpnq node/ip-10-0-142-97.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:10:12.660 E ns/openshift-sdn pod/ovs-pbfz6 node/ip-10-0-142-97.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): :07:30.568Z|00164|connmgr|INFO|br0<->unix#529: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:30.594Z|00165|bridge|INFO|bridge br0: deleted interface veth305a2a64 on port 16\n2020-09-23T13:07:30.640Z|00166|connmgr|INFO|br0<->unix#532: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:30.057Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 510 seconds (most recently, 510 seconds ago) due to excessive rate\n2020-09-23T13:07:30.057Z|00018|jsonrpc|WARN|unix#449: receive error: Connection reset by peer\n2020-09-23T13:07:30.057Z|00019|reconnect|WARN|unix#449: connection dropped (Connection reset by peer)\n2020-09-23T13:07:30.786Z|00167|connmgr|INFO|br0<->unix#537: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:30.818Z|00168|bridge|INFO|bridge br0: deleted interface veth483ed9a5 on port 15\n2020-09-23T13:07:30.862Z|00169|connmgr|INFO|br0<->unix#541: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:30.902Z|00170|connmgr|INFO|br0<->unix#544: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:30.935Z|00171|bridge|INFO|bridge br0: deleted interface veth8ea3c483 on port 13\n2020-09-23T13:07:30.803Z|00020|jsonrpc|WARN|unix#473: receive error: Connection reset by peer\n2020-09-23T13:07:30.803Z|00021|reconnect|WARN|unix#473: connection dropped (Connection reset by peer)\n2020-09-23T13:07:59.464Z|00172|connmgr|INFO|br0<->unix#563: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:59.500Z|00173|connmgr|INFO|br0<->unix#566: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:59.539Z|00174|bridge|INFO|bridge br0: deleted interface vethbc9e45c5 on port 5\n2020-09-23T13:08:14.529Z|00175|connmgr|INFO|br0<->unix#581: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:08:14.557Z|00176|connmgr|INFO|br0<->unix#584: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:08:14.578Z|00177|bridge|INFO|bridge br0: deleted interface vethe0ffd15d on port 6\n2020-09-23 13:08:27 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 13:10:12.673 E ns/openshift-machine-config-operator pod/machine-config-daemon-lgd8w node/ip-10-0-142-97.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:10:21.750 E ns/openshift-machine-config-operator pod/machine-config-daemon-lgd8w node/ip-10-0-142-97.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 23 13:10:41.648 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-66.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/09/23 12:55:54 Watching directory: "/etc/alertmanager/config"\n
Sep 23 13:10:41.648 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-66.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/09/23 12:55:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/23 12:55:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/23 12:55:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/23 12:55:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/09/23 12:55:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/23 12:55:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/09/23 12:55:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/23 12:55:55 http.go:106: HTTPS: listening on [::]:9095\n2020/09/23 12:57:32 reverseproxy.go:447: http: proxy error: context canceled\n
Sep 23 13:10:41.663 E ns/openshift-monitoring pod/kube-state-metrics-5fc947c479-knqkb node/ip-10-0-142-66.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Sep 23 13:10:41.720 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-66.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/23 13:07:50 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 23 13:10:41.720 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-66.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/09/23 13:07:50 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/23 13:07:50 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/23 13:07:50 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/23 13:07:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/23 13:07:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/23 13:07:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/23 13:07:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/23 13:07:50 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/23 13:07:50 http.go:106: HTTPS: listening on [::]:9091\n
Sep 23 13:10:41.720 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-66.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-09-23T13:07:49.849858353Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-09-23T13:07:49.850015739Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-09-23T13:07:49.851471953Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-09-23T13:07:54.982703362Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Sep 23 13:10:46.499 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-130.us-west-2.compute.internal" not ready since 2020-09-23 13:08:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 23 13:10:46.499 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-130.us-west-2.compute.internal" not ready since 2020-09-23 13:08:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 23 13:10:46.503 E clusteroperator/kube-controller-manager changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-130.us-west-2.compute.internal" not ready since 2020-09-23 13:08:40 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Sep 23 13:10:52.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error):            ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0923 12:51:57.478049       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0923 12:51:57.479320       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0923 12:51:57.479374       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0923 12:56:26.694456       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:45.274764       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Sep 23 13:10:52.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:06:53.622961       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:06:53.623236       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:01.908510       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:01.908759       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:11.916508       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:11.916714       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:21.922706       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:21.923036       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:31.931128       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:31.931450       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:41.940468       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:41.940730       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:07:51.948486       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:07:51.948732       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:08:01.957338       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:08:01.957649       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 23 13:10:52.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): er="apiserver-loopback-client-ca@1600865795" (2020-09-23 11:56:34 +0000 UTC to 2021-09-23 11:56:34 +0000 UTC (now=2020-09-23 12:56:35.319719124 +0000 UTC))\nI0923 12:56:35.319769       1 secure_serving.go:178] Serving securely on [::]:10257\nI0923 12:56:35.319798       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0923 12:56:35.320607       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0923 12:56:35.320999       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:35.790476       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:35.790518       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0923 12:56:41.473660       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:44.959616       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0923 12:56:52.506314       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Sep 23 13:10:52.373 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): 2020-09-23 12:56:53.609977448 +0000 UTC))\nI0923 12:56:53.610007       1 tlsconfig.go:179] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-09-23 12:11:01 +0000 UTC to 2030-09-21 12:11:01 +0000 UTC (now=2020-09-23 12:56:53.609996193 +0000 UTC))\nI0923 12:56:53.610026       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kube-csr-signer_@1600864130" [] issuer="kubelet-signer" (2020-09-23 12:28:49 +0000 UTC to 2020-09-24 12:11:05 +0000 UTC (now=2020-09-23 12:56:53.610014801 +0000 UTC))\nI0923 12:56:53.611682       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1600864133" (2020-09-23 12:29:04 +0000 UTC to 2022-09-23 12:29:05 +0000 UTC (now=2020-09-23 12:56:53.611662331 +0000 UTC))\nI0923 12:56:53.612147       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1600865796" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1600865796" (2020-09-23 11:56:35 +0000 UTC to 2021-09-23 11:56:35 +0000 UTC (now=2020-09-23 12:56:53.612130257 +0000 UTC))\nI0923 12:56:53.612243       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1600865796" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1600865796" (2020-09-23 11:56:35 +0000 UTC to 2021-09-23 11:56:35 +0000 UTC (now=2020-09-23 12:56:53.612230143 +0000 UTC))\nI0923 12:56:53.709949       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Sep 23 13:10:52.452 E ns/openshift-monitoring pod/node-exporter-f2rwb node/ip-10-0-131-130.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:55:08Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:08Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:10:52.469 E ns/openshift-cluster-node-tuning-operator pod/tuned-r8jb5 node/ip-10-0-131-130.us-west-2.compute.internal container=tuned container exited with code 143 (Error): s will not trigger profile reload.\nI0923 13:07:33.782257   76496 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: false\nI0923 13:07:33.988958   76496 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: false\nI0923 13:07:34.182313   76496 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: false\nI0923 13:07:34.378190   76496 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-131-130.us-west-2.compute.internal) labels changed node wide: true\nI0923 13:07:38.605999   76496 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:07:38.607320   76496 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:07:38.706324   76496 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:07:41.958808   76496 openshift-tuned.go:550] Pod (openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-764b59ff47-f7wx4) labels changed node wide: true\nI0923 13:07:43.606002   76496 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:07:43.607122   76496 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:07:43.706290   76496 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:08:01.961891   76496 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-946b65b75-774cv) labels changed node wide: true\n2020-09-23 13:08:02,862 INFO     tuned.daemon.controller: terminating controller\n2020-09-23 13:08:02,862 INFO     tuned.daemon.daemon: stopping tuning\nI0923 13:08:02.866422   76496 openshift-tuned.go:137] Received signal: terminated\n
Sep 23 13:10:52.482 E ns/openshift-sdn pod/sdn-controller-9wvpw node/ip-10-0-131-130.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0923 12:57:25.571437       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 23 13:10:52.506 E ns/openshift-controller-manager pod/controller-manager-5pqlc node/ip-10-0-131-130.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 23 13:10:52.549 E ns/openshift-sdn pod/ovs-8x88c node/ip-10-0-131-130.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 571: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:30.803Z|00170|connmgr|INFO|br0<->unix#574: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:30.831Z|00171|bridge|INFO|bridge br0: deleted interface vethb59f8930 on port 7\n2020-09-23T13:07:31.215Z|00172|connmgr|INFO|br0<->unix#577: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:31.242Z|00173|connmgr|INFO|br0<->unix#580: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:31.262Z|00174|bridge|INFO|bridge br0: deleted interface veth1476136e on port 12\n2020-09-23T13:07:31.848Z|00175|connmgr|INFO|br0<->unix#583: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:31.876Z|00176|connmgr|INFO|br0<->unix#586: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:31.896Z|00177|bridge|INFO|bridge br0: deleted interface veth55a3123a on port 8\n2020-09-23T13:07:32.396Z|00178|connmgr|INFO|br0<->unix#589: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:32.430Z|00179|connmgr|INFO|br0<->unix#592: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:32.456Z|00180|bridge|INFO|bridge br0: deleted interface veth9dba59c5 on port 5\n2020-09-23T13:07:32.982Z|00181|connmgr|INFO|br0<->unix#598: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:33.009Z|00182|connmgr|INFO|br0<->unix#601: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:33.031Z|00183|bridge|INFO|bridge br0: deleted interface vethc5692058 on port 17\n2020-09-23T13:07:58.754Z|00009|jsonrpc|WARN|unix#544: receive error: Connection reset by peer\n2020-09-23T13:07:58.754Z|00010|reconnect|WARN|unix#544: connection dropped (Connection reset by peer)\n2020-09-23T13:07:58.707Z|00184|connmgr|INFO|br0<->unix#622: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:07:58.739Z|00185|connmgr|INFO|br0<->unix#625: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:07:58.760Z|00186|bridge|INFO|bridge br0: deleted interface veth0fb8f1ed on port 13\n2020-09-23 13:08:02 info: Saving flows ...\n2020-09-23T13:08:02Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Sep 23 13:10:52.561 E ns/openshift-multus pod/multus-admission-controller-pt5vn node/ip-10-0-131-130.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 23 13:10:52.597 E ns/openshift-multus pod/multus-566hd node/ip-10-0-131-130.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:10:52.616 E ns/openshift-machine-config-operator pod/machine-config-daemon-lkvmk node/ip-10-0-131-130.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:10:52.661 E ns/openshift-machine-config-operator pod/machine-config-server-j4wvn node/ip-10-0-131-130.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 13:07:27.063420       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 13:07:27.064367       1 api.go:56] Launching server on :22624\nI0923 13:07:27.064438       1 api.go:56] Launching server on :22623\n
Sep 23 13:10:57.343 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): required revision has been compacted\nE0923 13:08:02.705500       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:08:02.707054       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:08:02.707415       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:08:02.708064       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:08:02.708299       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0923 13:08:02.815753       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0923 13:08:02.815794       1 controller.go:606] quota admission added evaluator for: prometheuses.monitoring.coreos.com\nI0923 13:08:02.897249       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0923 13:08:02.897238       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nW0923 13:08:02.962237       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.132.196 10.0.147.217]\nI0923 13:08:02.994140       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0923 13:08:02.996089       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0923 13:08:02.996231       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\n
Sep 23 13:10:57.343 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0923 12:55:17.192260       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 23 13:10:57.343 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal node/ip-10-0-131-130.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0923 13:06:53.562489       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:06:53.562754       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0923 13:06:53.767046       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:06:53.767255       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
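The TerminationStart events in the kube-apiserver log above describe the graceful-termination pattern: on SIGTERM the server reports itself unready so traffic drains away, while it keeps serving until the pre-shutdown hooks finish. A minimal sketch of that pattern, assuming a plain net/http server with a /readyz endpoint (illustrative only, not the apiserver's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
)

func main() {
	var terminating atomic.Bool

	// /readyz starts failing once termination begins, while the main
	// handler keeps answering, mirroring the "becoming unready, but
	// keeping serving" event in the log above.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if terminating.Load() {
			http.Error(w, "shutting down", http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "ok")
	})
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "still serving")
	})

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	go func() {
		<-sigs
		terminating.Store(true) // TerminationStart: unready, but keep serving
	}()

	http.ListenAndServe(":6080", nil)
}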
Sep 23 13:10:58.469 E ns/openshift-multus pod/multus-566hd node/ip-10-0-131-130.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:11:00.379 E ns/openshift-multus pod/multus-566hd node/ip-10-0-131-130.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:11:02.598 E ns/openshift-machine-config-operator pod/machine-config-daemon-lkvmk node/ip-10-0-131-130.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 23 13:11:04.757 E ns/openshift-multus pod/multus-566hd node/ip-10-0-131-130.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
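The repeated "invariant violation: pod may not transition Running->Pending" events flag a pod whose observed phase moved backwards between successive observations, which the upgrade monitor treats as an error. A minimal sketch of such a check, assuming an illustrative phase ordering (not the e2e framework's own code):

package main

import "fmt"

// phaseRank orders pod phases so a later observation may never report an
// "earlier" phase; Running->Pending is then a violation. The ranking is
// illustrative, not the test framework's own table.
var phaseRank = map[string]int{
	"Pending":   1,
	"Running":   2,
	"Succeeded": 3,
	"Failed":    3,
}

// checkTransition returns an error when a pod's phase moves backwards,
// e.g. from Running back to Pending, as flagged in the events above.
func checkTransition(pod, oldPhase, newPhase string) error {
	if phaseRank[newPhase] < phaseRank[oldPhase] {
		return fmt.Errorf("invariant violation: pod %s may not transition %s->%s", pod, oldPhase, newPhase)
	}
	return nil
}

func main() {
	if err := checkTransition("openshift-multus/multus-566hd", "Running", "Pending"); err != nil {
		fmt.Println(err)
	}
}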
Sep 23 13:11:09.844 E ns/openshift-console pod/console-5f894f5698-84zjn node/ip-10-0-147-217.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/09/23 12:56:17 cmd/main: cookies are secure!\n2020/09/23 12:56:17 cmd/main: Binding to [::]:8443...\n2020/09/23 12:56:17 cmd/main: using TLS\n2020/09/23 12:57:12 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/09/23 12:57:17 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Sep 23 13:11:09.884 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-55bb9b9785-n4xlk node/ip-10-0-147-217.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): e  openshift-apiserver-operator/openshift-apiserver-operator-lock...\nI0923 13:08:55.232436       1 leaderelection.go:251] successfully acquired lease openshift-apiserver-operator/openshift-apiserver-operator-lock\nI0923 13:08:55.232615       1 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"c4bc053a-3cf2-414d-b8fd-4292358dc987", APIVersion:"v1", ResourceVersion:"32523", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7e0e79f7-5cb2-4551-b1b7-9bd8b60b1ffa became leader\nI0923 13:08:55.244115       1 logging_controller.go:82] Starting LogLevelController\nI0923 13:08:55.245687       1 workload_controller.go:185] Starting OpenShiftAPIServerOperator\nI0923 13:08:55.246315       1 config_observer_controller.go:148] Starting ConfigObserver\nI0923 13:08:55.247285       1 prune_controller.go:221] Starting PruneController\nI0923 13:08:55.247297       1 revision_controller.go:336] Starting RevisionController\nI0923 13:08:55.247650       1 status_controller.go:198] Starting StatusSyncer-openshift-apiserver\nI0923 13:08:55.247676       1 finalizer_controller.go:119] Starting FinalizerController\nI0923 13:08:55.247766       1 resourcesync_controller.go:217] Starting ResourceSyncController\nI0923 13:08:55.247823       1 condition_controller.go:191] Starting EncryptionConditionController\nI0923 13:08:55.247833       1 unsupportedconfigoverrides_controller.go:151] Starting UnsupportedConfigOverridesController\nI0923 13:08:55.248005       1 state_controller.go:160] Starting EncryptionStateController\nI0923 13:08:55.248008       1 prune_controller.go:193] Starting EncryptionPruneController\nI0923 13:08:55.249099       1 key_controller.go:352] Starting EncryptionKeyController\nI0923 13:08:55.249358       1 migration_controller.go:316] Starting EncryptionMigrationController\nI0923 13:11:08.536604       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 13:11:08.536650       1 leaderelection.go:66] leaderelection lost\n
Sep 23 13:11:10.956 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5f8b64b9cd-hnj6k node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): } {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\\nI0923 13:06:53.767046       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0923 13:06:53.767255       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\\n\"\nStaticPodsDegraded: nodes/ip-10-0-131-130.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz-7\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-131-130.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz-7\" is terminated: \"Error\" - \"I0923 12:55:17.192260       1 readyz.go:103] Listening on 0.0.0.0:6080\\n\"\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-131-130.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-131-130.us-west-2.compute.internal container=\"kube-apiserver-7\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0923 13:11:09.621944       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0923 13:11:09.622003       1 leaderelection.go:66] leaderelection lost\n
Sep 23 13:11:12.009 E ns/openshift-machine-config-operator pod/machine-config-controller-66fc4b9bbd-dnhtc node/ip-10-0-147-217.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): nternal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-9b2778e914bf592704eb5d2c717487d1\nI0923 13:10:32.870809       1 node_controller.go:452] Pool worker: node ip-10-0-142-66.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0923 13:10:32.883368       1 node_controller.go:433] Pool worker: node ip-10-0-142-66.us-west-2.compute.internal is now reporting unready: node ip-10-0-142-66.us-west-2.compute.internal is reporting Unschedulable\nI0923 13:10:52.120765       1 node_controller.go:433] Pool master: node ip-10-0-131-130.us-west-2.compute.internal is now reporting unready: node ip-10-0-131-130.us-west-2.compute.internal is reporting NotReady=False\nI0923 13:11:01.681213       1 node_controller.go:433] Pool master: node ip-10-0-131-130.us-west-2.compute.internal is now reporting unready: node ip-10-0-131-130.us-west-2.compute.internal is reporting Unschedulable\nI0923 13:11:02.847903       1 node_controller.go:442] Pool master: node ip-10-0-131-130.us-west-2.compute.internal has completed update to rendered-master-51aeeb923bd7be7291b0154e021f1e8e\nI0923 13:11:02.861253       1 node_controller.go:435] Pool master: node ip-10-0-131-130.us-west-2.compute.internal is now reporting ready\nI0923 13:11:06.681678       1 node_controller.go:758] Setting node ip-10-0-147-217.us-west-2.compute.internal to desired config rendered-master-51aeeb923bd7be7291b0154e021f1e8e\nI0923 13:11:06.701927       1 node_controller.go:452] Pool master: node ip-10-0-147-217.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-51aeeb923bd7be7291b0154e021f1e8e\nI0923 13:11:07.414037       1 node_controller.go:452] Pool master: node ip-10-0-147-217.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0923 13:11:07.433458       1 node_controller.go:433] Pool master: node ip-10-0-147-217.us-west-2.compute.internal is now reporting unready: node ip-10-0-147-217.us-west-2.compute.internal is reporting Unschedulable\n
Sep 23 13:11:12.990 E ns/openshift-machine-config-operator pod/machine-config-operator-5488bbbd89-jvk7d node/ip-10-0-147-217.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): ] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"37e46580-1918-436c-938c-aa3cfdc5f7f6", ResourceVersion:"30245", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736460915, loc:(*time.Location)(0x271c960)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-5488bbbd89-jvk7d_6429e183-b3a2-4ce8-aa91-e84bdb56767c\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-09-23T13:04:25Z\",\"renewTime\":\"2020-09-23T13:04:25Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-5488bbbd89-jvk7d_6429e183-b3a2-4ce8-aa91-e84bdb56767c became leader'\nI0923 13:04:25.727751       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config\nI0923 13:04:26.149039       1 operator.go:246] Starting MachineConfigOperator\nI0923 13:04:26.153602       1 event.go:255] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"82118369-217c-4a63-917e-008f188a4cfd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 4.3.0-0.ci.test-2020-09-23-120742-ci-op-k1p58lb0}] to [{operator 4.3.0-0.ci.test-2020-09-23-121028-ci-op-k1p58lb0}]\n
Sep 23 13:11:14.018 E ns/openshift-machine-api pod/machine-api-operator-7475cdd8c8-hchp2 node/ip-10-0-147-217.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 23 13:11:14.977 E ns/openshift-service-ca pod/apiservice-cabundle-injector-7797cd6f4f-mr2pp node/ip-10-0-147-217.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Sep 23 13:12:02.034 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:13:17.034 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
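The "OpenShift API is not responding to GET requests" entries come from a probe that issues periodic GETs and records the window during which they fail, in the same style as the disruption summaries at the top of this report. A minimal sketch of that kind of availability probe, with a placeholder endpoint and no authentication (both assumptions; the real monitor uses cluster credentials and the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint; the real probe targets the OpenShift API server.
	const endpoint = "https://api.example.invalid:6443/healthz"

	client := &http.Client{
		Timeout: 3 * time.Second,
		// Skipping TLS verification keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	var downSince time.Time
	for range time.Tick(time.Second) {
		resp, err := client.Get(endpoint)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		switch {
		case !healthy && downSince.IsZero():
			downSince = time.Now()
			fmt.Printf("%s E openshift-apiserver OpenShift API is not responding to GET requests\n",
				downSince.Format("Jan 02 15:04:05.000"))
		case healthy && !downSince.IsZero():
			fmt.Printf("disruption lasted %s\n", time.Since(downSince).Round(time.Second))
			downSince = time.Time{}
		}
	}
}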
Sep 23 13:13:22.530 E ns/openshift-cluster-node-tuning-operator pod/tuned-hffcq node/ip-10-0-132-196.us-west-2.compute.internal container=tuned container exited with code 143 (Error): t-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:09:47.345870   78711 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-946b65b75-l4hh8) labels changed node wide: true\nI0923 13:09:48.672092   78711 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:09:48.673952   78711 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:09:48.783877   78711 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:13.189668   78711 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-operator-7475cdd8c8-kvstf) labels changed node wide: true\nI0923 13:11:13.672068   78711 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:13.674171   78711 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:13.837279   78711 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:32.371159   78711 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-c6c7c55cb-b4rpd) labels changed node wide: true\nI0923 13:11:33.672080   78711 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:33.673583   78711 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:33.812587   78711 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:34.196324   78711 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-5994689bc8-7wbbc) labels changed node wide: true\nI0923 13:11:38.195111   78711 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0923 13:11:38.205444   78711 openshift-tuned.go:881] Pod event watch channel closed.\nI0923 13:11:38.205495   78711 openshift-tuned.go:883] Increasing resyncPeriod to 126\n
Sep 23 13:13:58.824 E ns/openshift-monitoring pod/node-exporter-2k54w node/ip-10-0-147-217.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:55:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:13:58.839 E ns/openshift-cluster-node-tuning-operator pod/tuned-9cgn4 node/ip-10-0-147-217.us-west-2.compute.internal container=tuned container exited with code 143 (Error): penshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:13.612511   74761 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:13.760405   74761 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:13.760889   74761 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-3-ip-10-0-147-217.us-west-2.compute.internal) labels changed node wide: true\nI0923 13:11:18.611087   74761 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:18.612741   74761 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:18.776814   74761 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:20.666855   74761 openshift-tuned.go:550] Pod (openshift-authentication-operator/authentication-operator-bd89cfd7c-w8fn8) labels changed node wide: true\nI0923 13:11:23.611082   74761 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:23.612235   74761 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:23.714418   74761 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:30.661530   74761 openshift-tuned.go:550] Pod (openshift-machine-config-operator/etcd-quorum-guard-86cc875486-rfpdg) labels changed node wide: true\nI0923 13:11:33.611057   74761 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:11:33.612267   74761 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:11:33.717216   74761 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0923 13:11:37.054822   74761 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-7548fbff57-njlpp) labels changed node wide: true\n
Sep 23 13:13:58.856 E ns/openshift-controller-manager pod/controller-manager-h9r2g node/ip-10-0-147-217.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 23 13:13:58.871 E ns/openshift-sdn pod/sdn-controller-6n5c7 node/ip-10-0-147-217.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0923 12:57:31.305168       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Sep 23 13:13:58.893 E ns/openshift-multus pod/multus-admission-controller-prh88 node/ip-10-0-147-217.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Sep 23 13:13:58.904 E ns/openshift-sdn pod/ovs-9p6nx node/ip-10-0-147-217.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 4.210Z|00236|bridge|INFO|bridge br0: deleted interface veth0b0e1fb1 on port 22\n2020-09-23T13:11:14.792Z|00237|connmgr|INFO|br0<->unix#876: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:14.817Z|00238|connmgr|INFO|br0<->unix#879: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:11:14.840Z|00239|bridge|INFO|bridge br0: deleted interface veth2a5117d2 on port 11\n2020-09-23T13:11:15.030Z|00240|connmgr|INFO|br0<->unix#882: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:15.058Z|00241|connmgr|INFO|br0<->unix#885: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:11:15.078Z|00242|bridge|INFO|bridge br0: deleted interface veth9a252a8f on port 10\n2020-09-23T13:11:18.661Z|00018|jsonrpc|WARN|Dropped 3 log messages in last 803 seconds (most recently, 802 seconds ago) due to excessive rate\n2020-09-23T13:11:18.661Z|00019|jsonrpc|WARN|unix#779: receive error: Connection reset by peer\n2020-09-23T13:11:18.661Z|00020|reconnect|WARN|unix#779: connection dropped (Connection reset by peer)\n2020-09-23T13:11:18.608Z|00243|bridge|INFO|bridge br0: added interface veth4a1effa2 on port 29\n2020-09-23T13:11:18.646Z|00244|connmgr|INFO|br0<->unix#891: 5 flow_mods in the last 0 s (5 adds)\n2020-09-23T13:11:18.717Z|00245|connmgr|INFO|br0<->unix#895: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:18.721Z|00246|connmgr|INFO|br0<->unix#897: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-09-23T13:11:21.069Z|00247|connmgr|INFO|br0<->unix#900: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:21.102Z|00248|connmgr|INFO|br0<->unix#903: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:11:21.125Z|00249|bridge|INFO|bridge br0: deleted interface veth4a1effa2 on port 29\n2020-09-23T13:11:35.354Z|00250|connmgr|INFO|br0<->unix#918: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:35.380Z|00251|connmgr|INFO|br0<->unix#921: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:11:35.401Z|00252|bridge|INFO|bridge br0: deleted interface veth9e8c5173 on port 28\n2020-09-23 13:11:38 info: Saving flows ...\n
Sep 23 13:13:58.930 E ns/openshift-multus pod/multus-hb49s node/ip-10-0-147-217.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:13:58.949 E ns/openshift-machine-config-operator pod/machine-config-daemon-kj892 node/ip-10-0-147-217.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:13:58.958 E ns/openshift-machine-config-operator pod/machine-config-server-jhjnd node/ip-10-0-147-217.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 13:07:21.711846       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 13:07:21.712741       1 api.go:56] Launching server on :22624\nI0923 13:07:21.712769       1 api.go:56] Launching server on :22623\n
Sep 23 13:13:58.994 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error):            ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n 'LISTEN     0      128       [::]:10357                 [::]:*                  ' ']'\n+ sleep 1\n++ ss -Htanop '(' sport = 10357 ')'\n+ '[' -n '' ']'\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0923 12:53:08.467234       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0923 12:53:08.468608       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0923 12:53:08.468655       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0923 12:54:03.424475       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0923 12:54:19.427960       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Sep 23 13:13:58.994 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:10:22.107466       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:10:22.108286       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:10:32.116462       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:10:32.116852       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:10:42.128085       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:10:42.128375       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:10:52.150000       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:10:52.150359       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:11:02.162759       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:11:02.163503       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:11:12.168938       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:11:12.169381       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:11:22.177799       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:11:22.178344       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:11:32.185181       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:11:32.185477       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 23 13:13:58.994 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): erver-c6c7c55cb", UID:"e51c1fef-d523-493e-adb8-3bf67674a8a6", APIVersion:"apps/v1", ResourceVersion:"34863", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-c6c7c55cb-b4rpd\nI0923 13:11:34.993056       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0923 13:11:37.163296       1 replica_set.go:608] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-58dc98898f, need 0, deleting 1\nI0923 13:11:37.163331       1 replica_set.go:226] Found 5 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-58dc98898f: packageserver-5994689bc8, packageserver-58dc98898f, packageserver-c6c7c55cb, packageserver-bd96f8776, packageserver-567cc495d8\nI0923 13:11:37.163415       1 controller_utils.go:602] Controller packageserver-58dc98898f deleting pod openshift-operator-lifecycle-manager/packageserver-58dc98898f-fplt5\nI0923 13:11:37.164140       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"d6719f50-b246-49f2-944d-be71ffd693bd", APIVersion:"apps/v1", ResourceVersion:"34877", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-58dc98898f to 0\nI0923 13:11:37.181664       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-58dc98898f", UID:"605a5ce7-700b-4d8b-a498-95ed98ec88ad", APIVersion:"apps/v1", ResourceVersion:"34923", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-58dc98898f-fplt5\nE0923 13:11:37.186168       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
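The cluster-policy-controller log above shows its wrapper polling ss once per second until nothing is listening on port 10357 before exec'ing the controller, apparently so a new revision does not race the previous one for its health port. A minimal Go rendering of that wait-for-port-free pattern, using a bind attempt as a stand-in for the ss check (function name and timing are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPortFree blocks until nothing is listening on the given port,
// mirroring the `ss -Htanop '( sport = 10357 )'` loop in the wrapper log.
// A successful bind is used as a proxy for "no listener left".
func waitForPortFree(port int) {
	addr := fmt.Sprintf(":%d", port)
	for {
		ln, err := net.Listen("tcp", addr)
		if err == nil {
			ln.Close()
			return
		}
		time.Sleep(time.Second)
	}
}

func main() {
	waitForPortFree(10357)
	fmt.Println("port 10357 free; starting cluster-policy-controller (placeholder)")
}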
Sep 23 13:13:59.004 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): easible. Bound node resource: "Capacity: CPU<4>|Memory<15944120Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14793144Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0923 13:11:20.518962       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-86cc875486-psvx4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0923 13:11:22.527622       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-86cc875486-psvx4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0923 13:11:25.527509       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-86cc875486-psvx4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0923 13:11:30.527466       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-86cc875486-psvx4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0923 13:11:32.359368       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-c6c7c55cb-b4rpd is bound successfully on node "ip-10-0-132-196.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16116152Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14965176Ki>|Pods<250>|StorageEphemeral<114381692328>.".\n
Sep 23 13:13:59.042 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): red revision has been compacted\nE0923 13:11:37.847672       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848488       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848569       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848595       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848704       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848940       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.848947       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849061       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849093       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849199       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849302       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849333       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0923 13:11:37.849893       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0923 13:11:38.003228       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0923 13:11:38.003393       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Sep 23 13:13:59.042 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): .go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0923 13:04:28.734363       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:04:28.734653       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nE0923 13:11:38.192428       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?allowWatchBookmarks=true&resourceVersion=32305&timeout=8m28s&timeoutSeconds=508&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0923 13:11:38.192716       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?allowWatchBookmarks=true&resourceVersion=32496&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Sep 23 13:13:59.042 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-217.us-west-2.compute.internal node/ip-10-0-147-217.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0923 12:52:52.095398       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 23 13:14:06.772 E ns/openshift-multus pod/multus-hb49s node/ip-10-0-147-217.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:14:09.718 E ns/openshift-machine-config-operator pod/machine-config-daemon-kj892 node/ip-10-0-147-217.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 23 13:14:13.753 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Sep 23 13:14:22.891 E ns/openshift-cluster-machine-approver pod/machine-approver-64967f66d8-sppls node/ip-10-0-132-196.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 2:54:06.409995       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0923 12:54:06.410034       1 main.go:236] Starting Machine Approver\nI0923 12:54:06.510189       1 main.go:146] CSR csr-cknfq added\nI0923 12:54:06.510214       1 main.go:149] CSR csr-cknfq is already approved\nI0923 12:54:06.510227       1 main.go:146] CSR csr-d2j7v added\nI0923 12:54:06.510231       1 main.go:149] CSR csr-d2j7v is already approved\nI0923 12:54:06.510237       1 main.go:146] CSR csr-flmqc added\nI0923 12:54:06.510240       1 main.go:149] CSR csr-flmqc is already approved\nI0923 12:54:06.510245       1 main.go:146] CSR csr-qd5b9 added\nI0923 12:54:06.510249       1 main.go:149] CSR csr-qd5b9 is already approved\nI0923 12:54:06.510254       1 main.go:146] CSR csr-slcbm added\nI0923 12:54:06.510258       1 main.go:149] CSR csr-slcbm is already approved\nI0923 12:54:06.510262       1 main.go:146] CSR csr-rwz78 added\nI0923 12:54:06.510266       1 main.go:149] CSR csr-rwz78 is already approved\nI0923 12:54:06.510283       1 main.go:146] CSR csr-tngzd added\nI0923 12:54:06.510291       1 main.go:149] CSR csr-tngzd is already approved\nI0923 12:54:06.510297       1 main.go:146] CSR csr-7cmfh added\nI0923 12:54:06.510300       1 main.go:149] CSR csr-7cmfh is already approved\nI0923 12:54:06.510305       1 main.go:146] CSR csr-bb5bl added\nI0923 12:54:06.510309       1 main.go:149] CSR csr-bb5bl is already approved\nI0923 12:54:06.510314       1 main.go:146] CSR csr-dbbj2 added\nI0923 12:54:06.510318       1 main.go:149] CSR csr-dbbj2 is already approved\nI0923 12:54:06.510324       1 main.go:146] CSR csr-j7b4k added\nI0923 12:54:06.510328       1 main.go:149] CSR csr-j7b4k is already approved\nI0923 12:54:06.510332       1 main.go:146] CSR csr-k7h9m added\nI0923 12:54:06.510336       1 main.go:149] CSR csr-k7h9m is already approved\nW0923 13:08:03.732274       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 21914 (32230)\n
Sep 23 13:14:23.958 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-c9948cb5c-s9s7w node/ip-10-0-132-196.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0923 13:14:02.284731       1 tuned_controller.go:320] syncDaemonSet()\nI0923 13:14:02.415851       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0923 13:14:02.415873       1 status.go:25] syncOperatorStatus()\nI0923 13:14:02.421911       1 tuned_controller.go:188] syncServiceAccount()\nI0923 13:14:02.422017       1 tuned_controller.go:215] syncClusterRole()\nI0923 13:14:02.445044       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0923 13:14:02.469504       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:02.472895       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:02.475487       1 tuned_controller.go:320] syncDaemonSet()\nI0923 13:14:02.636953       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0923 13:14:02.636981       1 status.go:25] syncOperatorStatus()\nI0923 13:14:02.644872       1 tuned_controller.go:188] syncServiceAccount()\nI0923 13:14:02.644995       1 tuned_controller.go:215] syncClusterRole()\nI0923 13:14:02.667122       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0923 13:14:02.690093       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:02.693454       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:02.699430       1 tuned_controller.go:320] syncDaemonSet()\nI0923 13:14:05.855065       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0923 13:14:05.855162       1 status.go:25] syncOperatorStatus()\nI0923 13:14:05.877214       1 tuned_controller.go:188] syncServiceAccount()\nI0923 13:14:05.877338       1 tuned_controller.go:215] syncClusterRole()\nI0923 13:14:05.915064       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0923 13:14:05.957897       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:05.961403       1 tuned_controller.go:281] syncClusterConfigMap()\nI0923 13:14:05.964851       1 tuned_controller.go:320] syncDaemonSet()\nF0923 13:14:22.665891       1 main.go:82] <nil>\n
Sep 23 13:14:27.035 E ns/openshift-machine-api pod/machine-api-operator-7475cdd8c8-kvstf node/ip-10-0-132-196.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Sep 23 13:14:28.154 E ns/openshift-machine-api pod/machine-api-controllers-75b957844-ptp7z node/ip-10-0-132-196.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 23 13:14:30.339 E ns/openshift-service-ca-operator pod/service-ca-operator-5c99bb9685-d4nd5 node/ip-10-0-132-196.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Sep 23 13:14:41.338 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-97.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-09-23T13:14:39.896Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-09-23T13:14:39.899Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-09-23T13:14:39.900Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-09-23T13:14:39.901Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-09-23T13:14:39.901Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-09-23T13:14:39.901Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-09-23T13:14:39.902Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-09-23T13:14:39.902Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-09-23T13:14:39.902Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-09-23
Sep 23 13:14:45.633 E clusteroperator/monitoring changed Degraded to True: UpdatingprometheusAdapterFailed: Failed to rollout the stack. Error: running task Updating prometheus-adapter failed: failed to load kube-system/extension-apiserver-authentication configmap: rpc error: code = Unavailable desc = transport is closing
Sep 23 13:14:57.647 E ns/openshift-cluster-node-tuning-operator pod/tuned-ldq5v node/ip-10-0-147-217.us-west-2.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0923 13:14:05.101944    3839 openshift-tuned.go:209] Extracting tuned profiles\nI0923 13:14:05.107214    3839 openshift-tuned.go:739] Resync period to pull node/pod labels: 62 [s]\nE0923 13:14:08.750061    3839 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-147-217.us-west-2.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0923 13:14:08.750182    3839 openshift-tuned.go:883] Increasing resyncPeriod to 124\n
Sep 23 13:14:59.414 E kube-apiserver failed contacting the API: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=37542&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 44.226.127.131:6443: connect: connection refused
Sep 23 13:14:59.432 E kube-apiserver failed contacting the API: Get https://api.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37635&timeout=5m57s&timeoutSeconds=357&watch=true: dial tcp 44.226.127.131:6443: connect: connection refused
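The two "failed contacting the API" lines show long-running watch requests to the external endpoint failing with "connection refused" while the load balancer moves to the next control-plane member; clients recover by retrying until a member accepts connections again. A minimal sketch of that retry-with-backoff pattern using plain net/http (URL, attempt count, and backoff values are placeholders, not the client's actual settings):

package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// getWithRetry retries a GET while the endpoint refuses connections,
// doubling the wait between attempts, as a watcher would after the
// "connect: connection refused" errors in the log above.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	backoff := time.Second
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		if !errors.Is(err, syscall.ECONNREFUSED) {
			return nil, err // some other failure; surface it unchanged
		}
		fmt.Printf("connection refused, retrying in %s\n", backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, fmt.Errorf("API still unreachable after %d attempts", attempts)
}

func main() {
	// Placeholder URL standing in for the clusterversions watch request.
	resp, err := getWithRetry("https://api.example.invalid:6443/apis/config.openshift.io/v1/clusterversions", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp.Body.Close()
}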
Sep 23 13:15:17.034 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:15:47.034 E openshift-apiserver OpenShift API is not responding to GET requests
Sep 23 13:16:24.530 E ns/openshift-monitoring pod/node-exporter-qdwz6 node/ip-10-0-142-66.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:55:32Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:32Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:16:24.551 E ns/openshift-sdn pod/ovs-nptd2 node/ip-10-0-142-66.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): 13:10:41.839Z|00187|connmgr|INFO|br0<->unix#791: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:10:41.876Z|00188|connmgr|INFO|br0<->unix#794: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:10:41.933Z|00189|bridge|INFO|bridge br0: deleted interface vetha085a08c on port 17\n2020-09-23T13:10:41.983Z|00190|connmgr|INFO|br0<->unix#797: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:10:42.033Z|00191|connmgr|INFO|br0<->unix#800: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:10:42.064Z|00192|bridge|INFO|bridge br0: deleted interface veth35e6a441 on port 4\n2020-09-23T13:10:42.105Z|00193|connmgr|INFO|br0<->unix#803: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:10:42.139Z|00194|connmgr|INFO|br0<->unix#806: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:10:42.170Z|00195|bridge|INFO|bridge br0: deleted interface vethcc66ebe9 on port 8\n2020-09-23T13:11:25.497Z|00196|connmgr|INFO|br0<->unix#838: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:11:25.535Z|00197|connmgr|INFO|br0<->unix#841: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:11:25.560Z|00198|bridge|INFO|bridge br0: deleted interface veth8a8c100b on port 5\n2020-09-23T13:11:25.549Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 841 seconds (most recently, 841 seconds ago) due to excessive rate\n2020-09-23T13:11:25.549Z|00018|jsonrpc|WARN|unix#746: receive error: Connection reset by peer\n2020-09-23T13:11:25.549Z|00019|reconnect|WARN|unix#746: connection dropped (Connection reset by peer)\n2020-09-23T13:11:25.554Z|00020|jsonrpc|WARN|unix#747: receive error: Connection reset by peer\n2020-09-23T13:11:25.554Z|00021|reconnect|WARN|unix#747: connection dropped (Connection reset by peer)\n2020-09-23T13:11:25.727Z|00022|jsonrpc|WARN|unix#750: receive error: Connection reset by peer\n2020-09-23T13:11:25.727Z|00023|reconnect|WARN|unix#750: connection dropped (Connection reset by peer)\n2020-09-23 13:14:37 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 13:16:24.596 E ns/openshift-multus pod/multus-plm68 node/ip-10-0-142-66.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:16:24.602 E ns/openshift-cluster-node-tuning-operator pod/tuned-hbb7w node/ip-10-0-142-66.us-west-2.compute.internal container=tuned container exited with code 143 (Error): or/tuned-lwg9q) labels changed node wide: true\nI0923 13:14:06.625292  104411 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:14:06.626922  104411 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0923 13:14:06.628169  104411 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:14:06.750786  104411 openshift-tuned.go:635] Active profile () != recommended profile (openshift-node)\nI0923 13:14:06.750840  104411 openshift-tuned.go:263] Starting tuned...\n2020-09-23 13:14:06,864 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-09-23 13:14:06,869 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-09-23 13:14:06,870 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-09-23 13:14:06,871 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-09-23 13:14:06,872 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-09-23 13:14:06,905 INFO     tuned.daemon.controller: starting controller\n2020-09-23 13:14:06,905 INFO     tuned.daemon.daemon: starting tuning\n2020-09-23 13:14:06,911 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-09-23 13:14:06,912 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-09-23 13:14:06,915 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-09-23 13:14:06,917 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-09-23 13:14:06,919 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-09-23 13:14:07,025 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-09-23 13:14:07,027 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0923 13:14:37.023530  104411 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\n
Sep 23 13:16:24.699 E ns/openshift-machine-config-operator pod/machine-config-daemon-zcrsm node/ip-10-0-142-66.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:16:33.014 E ns/openshift-machine-config-operator pod/machine-config-daemon-zcrsm node/ip-10-0-142-66.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 23 13:16:41.883 E ns/openshift-marketplace pod/redhat-operators-78f89f48b7-plk68 node/ip-10-0-157-63.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Sep 23 13:16:42.908 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 20/09/23 12:55:24 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/23 12:55:24 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/09/23 12:55:24 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/09/23 12:55:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/09/23 12:55:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/09/23 12:55:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/09/23 12:55:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/09/23 12:55:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/09/23 12:55:24 http.go:106: HTTPS: listening on [::]:9091\n2020/09/23 12:59:32 oauthproxy.go:774: basicauth: 10.131.0.26:57170 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/23 13:04:03 oauthproxy.go:774: basicauth: 10.131.0.26:33624 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/23 13:07:39 oauthproxy.go:774: basicauth: 10.128.2.27:43060 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/23 13:09:19 oauthproxy.go:774: basicauth: 10.130.0.66:37448 Authorization header does not start with 'Basic', skipping basic authentication\n2020/09/23 13:10:36 oauthproxy.go:774: basicauth: 10.131.0.12:56654 Authorization header does not start with 'Basic', skipping basic authentication\n202
Sep 23 13:16:42.908 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/09/23 12:54:56 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Sep 23 13:16:42.908 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-63.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): rr="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:54:57.970707278Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:02.970768879Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:07.970686538Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:12.970675193Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:17.970782626Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:22.970640454Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-09-23T12:55:27.970750412Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http:/
Sep 23 13:16:42.942 E ns/openshift-marketplace pod/community-operators-559f8c6476-ctncs node/ip-10-0-157-63.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Sep 23 13:17:34.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error):   1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-k1p58lb0-71c76.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.132.196:2379: connect: connection refused". Reconnecting...\nI0923 13:14:58.446863       1 store.go:1342] Monitoring clusteroperators.config.openshift.io count at <storage-prefix>//config.openshift.io/clusteroperators\nW0923 13:14:58.818309       1 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured\nI0923 13:14:58.818500       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0923 13:14:58.818971       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0923 13:14:58.850814       1 controller.go:141] slow openapi aggregation of "clusteroperators.config.openshift.io": 1.039831103s\nI0923 13:14:58.854465       1 controller.go:188] Updating CRD OpenAPI spec because clusterversions.config.openshift.io changed\nI0923 13:14:58.906864       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0923 13:14:58.907050       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nW0923 13:14:58.946860       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.131.130 10.0.147.217]\nI0923 13:14:58.961409       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Sep 23 13:17:34.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0923 12:50:40.106093       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Sep 23 13:17:34.079 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0923 13:12:21.752708       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:12:21.753039       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0923 13:12:21.958428       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:12:21.958646       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Sep 23 13:17:34.105 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): :12:08.126961       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nI0923 13:12:39.128760       1 trace.go:81] Trace[1579657614]: "Reflector github.com/openshift/client-go/build/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-09-23 13:12:09.127101774 +0000 UTC m=+1188.650105802) (total time: 30.001620832s):\nTrace[1579657614]: [30.001620832s] [30.001620832s] END\nE0923 13:12:39.128787       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0923 13:12:40.225518       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0923 13:12:43.296755       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0923 13:12:46.368854       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0923 13:13:20.166342       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0923 13:13:23.235619       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\n
Sep 23 13:17:34.105 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:13:42.298683       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:13:42.299017       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:13:52.305023       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:13:52.305470       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:02.313510       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:02.314144       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:12.322907       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:12.323888       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:22.331678       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:22.332781       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:32.343320       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:32.343657       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:42.349680       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:42.349975       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0923 13:14:52.357033       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0923 13:14:52.357296       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Sep 23 13:17:34.105 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): er af97523be598742649a753971ecc10a8\nI0923 13:14:57.858983       1 aws_loadbalancer.go:1386] Instances removed from load-balancer af97523be598742649a753971ecc10a8\nI0923 13:14:58.235203       1 service_controller.go:703] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0923 13:14:58.235280       1 event.go:255] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-lb-available-240", Name:"service-test", UID:"f97523be-5987-4264-9a75-3971ecc10a85", APIVersion:"v1", ResourceVersion:"18996", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0923 13:14:58.370156       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"d6719f50-b246-49f2-944d-be71ffd693bd", APIVersion:"apps/v1", ResourceVersion:"37264", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-548b8f7b9b to 0\nI0923 13:14:58.370640       1 replica_set.go:608] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-548b8f7b9b, need 0, deleting 1\nI0923 13:14:58.370676       1 replica_set.go:226] Found 7 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-548b8f7b9b: packageserver-c6c7c55cb, packageserver-bd96f8776, packageserver-747946cdfd, packageserver-5994689bc8, packageserver-58dc98898f, packageserver-567cc495d8, packageserver-548b8f7b9b\nI0923 13:14:58.370773       1 controller_utils.go:602] Controller packageserver-548b8f7b9b deleting pod openshift-operator-lifecycle-manager/packageserver-548b8f7b9b-t5prc\nI0923 13:14:58.401003       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-548b8f7b9b", UID:"646441c7-f5dd-464b-b99e-a15417986eae", APIVersion:"apps/v1", ResourceVersion:"37612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-548b8f7b9b-t5prc\n
Sep 23 13:17:34.117 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-196.us-west-2.compute.internal node/ip-10-0-132-196.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): .561567       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-pj7gj is bound successfully on node "ip-10-0-131-130.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15944120Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14793144Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0923 13:14:57.561800       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-4cz9c is bound successfully on node "ip-10-0-157-63.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0923 13:14:57.618883       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-dg2j8 is bound successfully on node "ip-10-0-142-66.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416932Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265956Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0923 13:14:57.623234       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-rh5r8 is bound successfully on node "ip-10-0-132-196.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16116152Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14965176Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0923 13:14:57.629532       1 scheduler.go:667] pod openshift-cluster-node-tuning-operator/tuned-c88h2 is bound successfully on node "ip-10-0-142-97.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\n
Sep 23 13:17:34.130 E ns/openshift-controller-manager pod/controller-manager-qfglg node/ip-10-0-132-196.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Sep 23 13:17:34.171 E ns/openshift-sdn pod/sdn-controller-vqzk6 node/ip-10-0-132-196.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): 24       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0923 12:57:11.332332       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"19995a17-302e-4249-84e7-22f87956baea", ResourceVersion:"27172", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736460892, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-132-196\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-09-23T12:28:12Z\",\"renewTime\":\"2020-09-23T12:57:11Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-132-196 became leader'\nI0923 12:57:11.332553       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0923 12:57:11.336543       1 master.go:51] Initializing SDN master\nI0923 12:57:11.346489       1 network_controller.go:60] Started OpenShift Network Controller\nW0923 13:08:03.741577       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 23403 (32230)\nW0923 13:08:03.741700       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 21910 (32230)\n
Sep 23 13:17:34.194 E ns/openshift-monitoring pod/node-exporter-5lsgg node/ip-10-0-132-196.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:55:21Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:55:21Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:17:34.253 E ns/openshift-multus pod/multus-admission-controller-p72mf node/ip-10-0-132-196.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Sep 23 13:17:34.267 E ns/openshift-multus pod/multus-txczr node/ip-10-0-132-196.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:17:34.278 E ns/openshift-sdn pod/ovs-hbg5w node/ip-10-0-132-196.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 23T13:14:28.702Z|00287|connmgr|INFO|br0<->unix#1051: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:14:28.729Z|00288|bridge|INFO|bridge br0: deleted interface veth120b3c8e on port 13\n2020-09-23T13:14:28.778Z|00289|connmgr|INFO|br0<->unix#1054: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:14:28.913Z|00290|connmgr|INFO|br0<->unix#1057: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:14:28.964Z|00291|bridge|INFO|bridge br0: deleted interface veth63a599b3 on port 21\n2020-09-23T13:14:29.466Z|00017|jsonrpc|WARN|Dropped 2 log messages in last 957 seconds (most recently, 957 seconds ago) due to excessive rate\n2020-09-23T13:14:29.466Z|00018|jsonrpc|WARN|unix#943: receive error: Connection reset by peer\n2020-09-23T13:14:29.466Z|00019|reconnect|WARN|unix#943: connection dropped (Connection reset by peer)\n2020-09-23T13:14:29.136Z|00292|connmgr|INFO|br0<->unix#1060: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:14:29.188Z|00293|connmgr|INFO|br0<->unix#1063: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:14:29.219Z|00294|bridge|INFO|bridge br0: deleted interface veth41f0fc3a on port 34\n2020-09-23T13:14:29.391Z|00295|connmgr|INFO|br0<->unix#1066: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:14:29.436Z|00296|connmgr|INFO|br0<->unix#1069: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:14:29.467Z|00297|bridge|INFO|bridge br0: deleted interface veth2f1aea9c on port 16\n2020-09-23T13:14:46.699Z|00020|jsonrpc|WARN|unix#961: receive error: Connection reset by peer\n2020-09-23T13:14:46.699Z|00021|reconnect|WARN|unix#961: connection dropped (Connection reset by peer)\n2020-09-23T13:14:46.654Z|00298|connmgr|INFO|br0<->unix#1088: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:14:46.682Z|00299|connmgr|INFO|br0<->unix#1091: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:14:46.705Z|00300|bridge|INFO|bridge br0: deleted interface veth79cb54b6 on port 35\n2020-09-23 13:14:58 info: Saving flows ...\n2020-09-23T13:14:58Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Sep 23 13:17:34.343 E ns/openshift-machine-config-operator pod/machine-config-server-2xf5g node/ip-10-0-132-196.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0923 13:07:24.280278       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-12-g747de90f-dirty (747de90fbfb379582694160dcc1181734c795695)\nI0923 13:07:24.281481       1 api.go:56] Launching server on :22624\nI0923 13:07:24.281793       1 api.go:56] Launching server on :22623\n
Sep 23 13:17:34.353 E ns/openshift-machine-config-operator pod/machine-config-daemon-btwcg node/ip-10-0-132-196.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:17:40.723 E ns/openshift-multus pod/multus-txczr node/ip-10-0-132-196.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:17:43.936 E ns/openshift-machine-config-operator pod/machine-config-daemon-btwcg node/ip-10-0-132-196.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Sep 23 13:19:22.949 E ns/openshift-monitoring pod/node-exporter-l4vvk node/ip-10-0-157-63.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 9-23T12:54:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-09-23T12:54:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Sep 23 13:19:22.980 E ns/openshift-sdn pod/ovs-6c25k node/ip-10-0-157-63.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): thde37176a on port 10\n2020-09-23T13:16:42.428Z|00196|connmgr|INFO|br0<->unix#977: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:16:42.508Z|00197|connmgr|INFO|br0<->unix#980: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:16:42.537Z|00198|bridge|INFO|bridge br0: deleted interface veth7a4ec67a on port 9\n2020-09-23T13:16:42.578Z|00199|connmgr|INFO|br0<->unix#983: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:16:42.612Z|00200|connmgr|INFO|br0<->unix#986: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:16:42.642Z|00201|bridge|INFO|bridge br0: deleted interface veth83ba286a on port 21\n2020-09-23T13:17:10.362Z|00202|connmgr|INFO|br0<->unix#1008: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:17:10.389Z|00203|connmgr|INFO|br0<->unix#1011: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:17:10.410Z|00204|bridge|INFO|bridge br0: deleted interface vethc740090f on port 18\n2020-09-23T13:17:10.669Z|00205|connmgr|INFO|br0<->unix#1014: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:17:10.696Z|00206|connmgr|INFO|br0<->unix#1017: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:17:10.718Z|00207|bridge|INFO|bridge br0: deleted interface veth067c18c0 on port 14\n2020-09-23T13:17:25.715Z|00025|jsonrpc|WARN|unix#930: receive error: Connection reset by peer\n2020-09-23T13:17:25.715Z|00026|reconnect|WARN|unix#930: connection dropped (Connection reset by peer)\n2020-09-23T13:17:25.719Z|00027|jsonrpc|WARN|unix#931: receive error: Connection reset by peer\n2020-09-23T13:17:25.720Z|00028|reconnect|WARN|unix#931: connection dropped (Connection reset by peer)\n2020-09-23T13:17:25.677Z|00208|connmgr|INFO|br0<->unix#1033: 2 flow_mods in the last 0 s (2 deletes)\n2020-09-23T13:17:25.704Z|00209|connmgr|INFO|br0<->unix#1036: 4 flow_mods in the last 0 s (4 deletes)\n2020-09-23T13:17:25.726Z|00210|bridge|INFO|bridge br0: deleted interface veth028eae4f on port 16\n2020-09-23 13:17:37 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Sep 23 13:19:23.015 E ns/openshift-multus pod/multus-nrnh6 node/ip-10-0-157-63.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Sep 23 13:19:23.025 E ns/openshift-machine-config-operator pod/machine-config-daemon-dh6mv node/ip-10-0-157-63.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Sep 23 13:19:23.026 E ns/openshift-cluster-node-tuning-operator pod/tuned-4cz9c node/ip-10-0-157-63.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:16:18.566331  125324 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:16:18.692337  125324 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:16:47.195503  125324 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-79d8558b5d-7qfgj) labels changed node wide: true\nI0923 13:16:48.564763  125324 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:16:48.566504  125324 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:16:48.678412  125324 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:16:57.202958  125324 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-cdf664879-ws27r) labels changed node wide: true\nI0923 13:16:58.564769  125324 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:16:58.566349  125324 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:16:58.677099  125324 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:17:11.967017  125324 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5031/foo-xc69z) labels changed node wide: false\nI0923 13:17:17.188170  125324 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5031/foo-hrwhn) labels changed node wide: true\nI0923 13:17:18.564757  125324 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0923 13:17:18.566188  125324 openshift-tuned.go:441] Getting recommended profile...\nI0923 13:17:18.677241  125324 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0923 13:17:37.186095  125324 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-240/service-test-wgrct) labels changed node wide: true\n
Sep 23 13:19:26.616 E ns/openshift-multus pod/multus-nrnh6 node/ip-10-0-157-63.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:19:28.659 E ns/openshift-multus pod/multus-nrnh6 node/ip-10-0-157-63.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Sep 23 13:19:31.671 E ns/openshift-machine-config-operator pod/machine-config-daemon-dh6mv node/ip-10-0-157-63.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error):