Result: SUCCESS
Tests: 3 failed / 20 succeeded
Started: 2020-02-26 10:07
Elapsed: 1h20m
Work namespace: ci-op-0dc7lsdl
Refs: release-4.3:3ce21b38
      298:d955283e
Pod: b0e1c5ba-587f-11ea-b192-0a58ac103020
Repo: openshift/cluster-api-provider-aws
Revision: 1

Test Failures


Cluster upgrade control-plane-upgrade (33m32s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\scontrol\-plane\-upgrade$'
API was unreachable during upgrade for at least 1m21s:

Feb 26 11:00:02.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 26 11:00:02.178 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:10:27.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 26 11:10:27.179 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:10:44.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 26 11:10:44.179 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:10:56.402 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:10:57.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:10:59.492 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:08.689 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:09.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:11:09.179 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:14.833 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:14.850 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:17.905 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:17.922 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:20.977 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:21.008 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:24.051 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:24.076 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:30.193 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:30.210 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:33.265 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:33.282 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:36.337 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:36.353 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:39.409 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:39.431 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:42.481 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:42.503 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:45.553 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:46.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:11:48.641 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:51.697 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:52.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:11:52.180 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:11:57.841 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:11:58.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:11:58.179 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:12:00.913 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:12:00.933 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:12:03.985 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:12:04.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:12:04.178 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:13:36.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 26 11:13:36.182 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:13:53.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 26 11:13:53.180 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:13:54.917 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:13:55.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:13:55.180 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:13:57.991 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:13:58.061 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:04.135 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:04.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:04.229 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:07.205 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:08.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:10.297 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:13.350 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:14.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:14.181 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:19.495 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:20.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:20.180 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:22.566 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:22.585 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:28.710 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:29.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:29.179 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:34.854 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:35.162 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:41.018 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:44.070 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:44.162 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:50.234 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:14:56.358 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:14:57.162 - 17s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:15:14.808 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:15:17.862 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:15:18.162 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:15:24.041 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:15:30.150 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:15:30.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:15:30.171 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:16:21.418 E kube-apiserver Kube API started failing: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Feb 26 11:16:22.162 - 999ms E kube-apiserver Kube API is not responding to GET requests
Feb 26 11:16:22.222 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 34.237.208.53:6443: connect: connection refused
Feb 26 11:16:23.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:16:23.177 I kube-apiserver Kube API started responding to GET requests
Feb 26 11:16:23.207 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:16:39.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 26 11:16:39.184 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:16:55.162 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 26 11:16:55.189 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:00.231 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:00.250 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:06.376 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:07.162 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:09.471 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:12.519 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:13.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:15.611 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:18.663 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:18.686 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:21.735 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:21.755 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:24.807 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:25.162 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:40.188 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:43.240 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:43.260 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:46.312 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:47.162 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:52.475 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:17:55.529 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:17:56.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:58.620 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:18:01.673 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:18:02.162 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:18:04.769 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:18:07.815 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:18:08.162 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:18:08.184 I openshift-apiserver OpenShift API started responding to GET requests
Feb 26 11:18:23.175 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 26 11:18:23.196 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1582716028.xml
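
The outage windows above come from a simple availability monitor: it repeatedly issues a GET against a fixed OpenShift API endpoint with a 15s timeout and records each transition between responding and not responding. Below is a minimal sketch of that style of probe, not the openshift-tests monitor code itself; the API hostname, poll interval, and TLS handling are placeholder assumptions, and only the endpoint path and timeout are taken from the log lines.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint path and 15s timeout taken from the log lines above; the
        // hostname is a placeholder for the cluster's API load balancer.
        url := "https://api.example.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"

        client := &http.Client{
            Timeout: 15 * time.Second, // matches the Client.Timeout errors in the log
            Transport: &http.Transport{
                // Only to keep the sketch self-contained; a real probe would
                // trust the cluster CA instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        available := true
        for range time.Tick(time.Second) {
            resp, err := client.Get(url)
            // A 404 for the deliberately missing imagestream still counts as
            // "responding"; 5xx (e.g. 503) and transport errors do not.
            ok := err == nil && resp.StatusCode < 500
            if resp != nil {
                resp.Body.Close()
            }
            switch {
            case available && !ok:
                reason := fmt.Sprintf("%v", err)
                if err == nil {
                    reason = resp.Status
                }
                fmt.Printf("%s OpenShift API stopped responding to GET requests: %s\n",
                    time.Now().Format("Jan 02 15:04:05.000"), reason)
                available = false
            case !available && ok:
                fmt.Printf("%s OpenShift API started responding to GET requests\n",
                    time.Now().Format("Jan 02 15:04:05.000"))
                available = true
            }
        }
    }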



Cluster upgrade k8s-service-upgrade (34m2s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sk8s\-service\-upgrade$'
Service was unreachable during upgrade for at least 27s:

Feb 26 10:48:44.233 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests on reused connections
Feb 26 10:48:44.233 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:48:45.225 - 999ms E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests on reused connections
Feb 26 10:48:45.225 - 999ms E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:48:46.288 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests on reused connections
Feb 26 10:48:46.288 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 10:48:47.271 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:48:48.225 - 4s    E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:48:53.289 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 10:48:54.232 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:48:55.225 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:48:55.261 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 10:48:56.235 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:48:57.225 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:48:57.282 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 10:48:58.239 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:48:59.225 - 1s    E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:49:00.276 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 10:49:01.231 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 10:49:02.225 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 10:49:02.266 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 11:01:22.227 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests on reused connections
Feb 26 11:01:22.290 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests on reused connections
Feb 26 11:16:46.226 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 11:16:46.284 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 11:17:38.226 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 11:17:38.289 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
Feb 26 11:17:52.226 E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service stopped responding to GET requests over new connections
Feb 26 11:17:53.225 - 5s    E ns/e2e-k8s-service-upgrade-5134 svc/service-test Service is not responding to GET requests over new connections
Feb 26 11:17:58.615 I ns/e2e-k8s-service-upgrade-5134 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1582716028.xml
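
The service-upgrade test distinguishes failures "on reused connections" from failures "over new connections": the former ride an already-established keep-alive TCP connection, the latter require a fresh dial through the load balancer, so the two can fail independently while nodes roll during the upgrade. Below is a minimal sketch of two probes with that split, not the e2e test code itself; the service URL, timeout, and poll interval are placeholder assumptions.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(name string, client *http.Client, url string) {
        resp, err := client.Get(url)
        if err != nil {
            fmt.Printf("Service stopped responding to GET requests %s: %v\n", name, err)
            return
        }
        resp.Body.Close()
        fmt.Printf("Service responding to GET requests %s (%s)\n", name, resp.Status)
    }

    func main() {
        // Placeholder for the external hostname of svc/service-test.
        url := "http://service-test.example/"

        // "Reused connections": the default transport keeps idle TCP
        // connections open, so successive GETs reuse an established connection.
        reused := &http.Client{Timeout: 3 * time.Second}

        // "New connections": disabling keep-alives forces a fresh dial and
        // handshake per request, exercising the full load-balancer/endpoint path.
        fresh := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{DisableKeepAlives: true},
        }

        for range time.Tick(time.Second) {
            probe("on reused connections", reused, url)
            probe("over new connections", fresh, url)
        }
    }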



openshift-tests Monitor cluster while tests execute (34m6s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
229 error level events were detected during this test run:

Feb 26 10:49:21.518 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-84f8ffb847-pbpwz node/ip-10-0-138-1.ec2.internal container=kube-apiserver-operator container exited with code 255 (Error): 92547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-138-1.ec2.internal" from revision 2 to 6 because static pod is ready\nI0226 10:46:00.072513       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bcb9ecff-6750-4737-b6c9-992547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("Progressing: 3 nodes are at revision 6"),Available message changed from "Available: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0226 10:46:02.041940       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bcb9ecff-6750-4737-b6c9-992547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0226 10:46:09.442808       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bcb9ecff-6750-4737-b6c9-992547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-138-1.ec2.internal -n openshift-kube-apiserver because it was missing\nW0226 10:49:18.945975       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18782 (19051)\nI0226 10:49:20.791314       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 10:49:20.791447       1 leaderelection.go:66] leaderelection lost\n
Feb 26 10:50:58.818 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-55546b87f5-b297g node/ip-10-0-138-1.ec2.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:51:08.875 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7576549cd9-b6qd5 node/ip-10-0-138-1.ec2.internal container=kube-scheduler-operator-container container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:52:35.114 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-68b55bf5b4-2bpfn node/ip-10-0-138-1.ec2.internal container=openshift-apiserver-operator container exited with code 255 (Error): .059035       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 8083 (15213)\nW0226 10:45:40.074513       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 8348 (15217)\nW0226 10:45:40.086105       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14741 (15108)\nW0226 10:45:40.725819       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 8101 (15221)\nW0226 10:45:40.728974       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 16421 (17187)\nW0226 10:45:40.733655       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10204 (15108)\nW0226 10:45:40.757502       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 9144 (15108)\nW0226 10:45:40.759561       1 reflector.go:299] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 11335 (15257)\nW0226 10:45:40.796522       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 11335 (15257)\nW0226 10:49:18.911623       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18782 (19051)\nI0226 10:52:34.140195       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 10:52:34.140261       1 leaderelection.go:66] leaderelection lost\n
Feb 26 10:53:26.606 E ns/openshift-machine-api pod/machine-api-operator-b4d4868b4-ss2h9 node/ip-10-0-138-1.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 26 10:54:30.506 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): g   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default "10.0.0.0/24")\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Feb 26 10:56:01.006 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/26 10:42:53 Watching directory: "/etc/alertmanager/config"\n
Feb 26 10:56:01.006 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/26 10:42:53 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/26 10:42:53 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:42:53 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:42:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/26 10:42:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:42:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/26 10:42:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:42:53 http.go:96: HTTPS: listening on [::]:9095\n
Feb 26 10:56:07.959 E ns/openshift-machine-api pod/cluster-autoscaler-operator-554c4f74bd-dt6n8 node/ip-10-0-138-228.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:07.959 E ns/openshift-machine-api pod/cluster-autoscaler-operator-554c4f74bd-dt6n8 node/ip-10-0-138-228.ec2.internal container=cluster-autoscaler-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:12.872 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/26 10:44:08 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 26 10:56:12.872 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/26 10:44:08 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:44:08 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:44:08 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:44:08 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:44:08 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:44:08 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:44:08 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:44:08 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:44:08 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 10:56:12.872 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-26T10:44:08.056568901Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-26T10:44:08.056679526Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-26T10:44:08.059752711Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-26T10:44:13.18666251Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 26 10:56:14.539 E ns/openshift-service-ca-operator pod/service-ca-operator-7f49fc684f-qjqxt node/ip-10-0-138-1.ec2.internal container=operator container exited with code 255 (Error): 
Feb 26 10:56:16.465 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:16.465 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:16.465 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:22.287 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:22.287 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:22.287 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-143.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:22.662 E ns/openshift-controller-manager pod/controller-manager-ftgbf node/ip-10-0-154-226.ec2.internal container=controller-manager container exited with code 137 (Error): 
Feb 26 10:56:23.035 E ns/openshift-monitoring pod/node-exporter-fbn64 node/ip-10-0-154-226.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:38:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:38:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:56:24.198 E ns/openshift-ingress pod/router-default-5dd9868b4f-9kb57 node/ip-10-0-142-143.ec2.internal container=router container exited with code 2 (Error): ng http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:55:41.956550       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:55:46.959818       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:55:51.956975       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:55:56.964114       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:56:01.957876       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:56:06.970524       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 10:56:11.961050       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nE0226 10:56:16.968270       1 limiter.go:140] error reloading router: waitid: no child processes\n - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\nW0226 10:56:20.956940       1 reflector.go:299] github.com/openshift/router/pkg/router/template/service_lookup.go:32: watch of *v1.Service ended with: too old resource version: 17996 (20272)\nI0226 10:56:23.075952       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:56:27.988 E ns/openshift-monitoring pod/grafana-667769cf99-g9qgf node/ip-10-0-147-70.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 26 10:56:28.169 E ns/openshift-monitoring pod/openshift-state-metrics-6547ffbcb7-hqmnk node/ip-10-0-129-96.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 26 10:56:28.230 E ns/openshift-monitoring pod/prometheus-adapter-67f5889947-ntrrk node/ip-10-0-142-143.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0226 10:38:17.726146       1 adapter.go:93] successfully using in-cluster auth\nI0226 10:38:18.514919       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 26 10:56:30.044 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/26 10:56:27 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 26 10:56:30.044 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/26 10:56:28 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:56:28 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:56:28 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:56:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:56:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:56:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:56:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:56:28 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:56:28 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 10:56:30.044 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-26T10:56:27.178212158Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-26T10:56:27.178346695Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-26T10:56:27.186055939Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\n
Feb 26 10:56:31.971 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=scheduler container exited with code 255 (Error): es evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 10:56:31.390178       1 scheduler.go:667] pod openshift-controller-manager/controller-manager-z2lns is bound successfully on node "ip-10-0-154-226.ec2.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 10:56:31.390458       1 scheduler.go:667] pod openshift-marketplace/community-operators-d945859db-7fgnd is bound successfully on node "ip-10-0-142-143.ec2.internal", 6 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 10:56:31.408441       1 scheduler.go:667] pod openshift-image-registry/node-ca-2d6w2 is bound successfully on node "ip-10-0-142-143.ec2.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 10:56:31.450496       1 scheduler.go:667] pod openshift-marketplace/certified-operators-597944b69c-mz5bj is bound successfully on node "ip-10-0-142-143.ec2.internal", 6 nodes evaluated, 3 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 10:56:31.725021       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0226 10:56:31.725056       1 server.go:264] leaderelection lost\n
Feb 26 10:56:37.196 E ns/openshift-monitoring pod/node-exporter-sw894 node/ip-10-0-129-96.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:37:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:56:45.701 E ns/openshift-cluster-node-tuning-operator pod/tuned-p8mzt node/ip-10-0-138-1.ec2.internal container=tuned container exited with code 143 (Error): 56 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:54.106455   26056 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 10:56:02.301834   26056 openshift-tuned.go:550] Pod (openshift-service-ca-operator/service-ca-operator-86784577d8-w9fg6) labels changed node wide: true\nI0226 10:56:03.854802   26056 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:56:03.856904   26056 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:56:04.041262   26056 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 10:56:08.234081   26056 openshift-tuned.go:550] Pod (openshift-controller-manager-operator/openshift-controller-manager-operator-7b777cccdd-cllxm) labels changed node wide: true\nI0226 10:56:08.854856   26056 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:56:08.856625   26056 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:56:09.017175   26056 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 10:56:18.226715   26056 openshift-tuned.go:550] Pod (openshift-service-ca-operator/service-ca-operator-7f49fc684f-qjqxt) labels changed node wide: true\nI0226 10:56:18.848331   26056 openshift-tuned.go:852] Lowering resyncPeriod to 55\nI0226 10:56:18.854764   26056 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:56:18.856487   26056 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:56:19.070440   26056 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0226 10:56:20.990331   26056 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 10:56:20.990544   26056 openshift-tuned.go:883] Increasing resyncPeriod to 110\n
Feb 26 10:56:46.110 E ns/openshift-cluster-node-tuning-operator pod/tuned-rskns node/ip-10-0-154-226.ec2.internal container=tuned container exited with code 143 (Error):   23469 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 10:54:55.739035   23469 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 10:54:59.104528   23469 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/kube-controller-manager-ip-10-0-154-226.ec2.internal) labels changed node wide: true\nI0226 10:54:59.867260   23469 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:54:59.869548   23469 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:00.067067   23469 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 10:55:10.889185   23469 openshift-tuned.go:550] Pod (openshift-kube-apiserver/kube-apiserver-ip-10-0-154-226.ec2.internal) labels changed node wide: true\nI0226 10:55:14.867245   23469 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:55:14.869387   23469 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:15.007323   23469 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 10:55:35.275425   23469 openshift-tuned.go:550] Pod (openshift-cluster-storage-operator/cluster-storage-operator-56bbcbf687-gxzvc) labels changed node wide: true\nI0226 10:55:39.867200   23469 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:55:39.868974   23469 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:39.988958   23469 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nE0226 10:56:20.988269   23469 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 10:56:20.988399   23469 openshift-tuned.go:883] Increasing resyncPeriod to 232\n
Feb 26 10:56:46.225 E ns/openshift-cluster-node-tuning-operator pod/tuned-qm9s7 node/ip-10-0-129-96.ec2.internal container=tuned container exited with code 143 (Error): 6688-n88dp) labels changed node wide: true\nI0226 10:55:52.921674    2635 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:55:52.923632    2635 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:53.039714    2635 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 10:55:53.926346    2635 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-6645d9c645-wmqlf) labels changed node wide: true\nI0226 10:55:57.921723    2635 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:55:57.923671    2635 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:55:58.111308    2635 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 10:56:00.767448    2635 openshift-tuned.go:550] Pod (openshift-monitoring/telemeter-client-6c48544f9f-2n4j2) labels changed node wide: true\nI0226 10:56:02.921696    2635 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:56:02.923677    2635 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:56:03.167955    2635 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 10:56:05.477302    2635 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-778b9dd58d-8rr57) labels changed node wide: true\nI0226 10:56:07.921737    2635 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 10:56:07.923905    2635 openshift-tuned.go:441] Getting recommended profile...\nI0226 10:56:08.115630    2635 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0226 10:56:20.970062    2635 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 10:56:20.970086    2635 openshift-tuned.go:883] Increasing resyncPeriod to 138\n
Feb 26 10:56:50.297 E ns/openshift-monitoring pod/thanos-querier-c6474f66f-9fdwj node/ip-10-0-142-143.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/26 10:44:04 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:44:04 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:44:04 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:44:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:44:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:44:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:44:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:44:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:44:04 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 10:56:51.310 E ns/openshift-monitoring pod/node-exporter-8tljv node/ip-10-0-142-143.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:37:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:56:58.339 E ns/openshift-marketplace pod/redhat-operators-5cf88fff6f-885mw node/ip-10-0-142-143.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Feb 26 10:57:01.232 E ns/openshift-monitoring pod/node-exporter-79cfd node/ip-10-0-138-228.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:37:59Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:59Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:57:02.381 E ns/openshift-monitoring pod/prometheus-adapter-67f5889947-krwlt node/ip-10-0-142-143.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0226 10:38:17.661689       1 adapter.go:93] successfully using in-cluster auth\nI0226 10:38:18.829580       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 26 10:57:03.168 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-26T10:56:58.053Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-26T10:56:58.061Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-26T10:56:58.061Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-26T10:56:58.062Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-26T10:56:58.062Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-26T10:56:58.062Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-26T10:56:58.063Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-26T10:56:58.063Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-26T10:56:58.063Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-26T10:56:58.064Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-26T10:56:58.064Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-26
Feb 26 10:57:27.535 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/26 10:44:40 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 26 10:57:27.535 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/26 10:44:40 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:44:40 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:44:40 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:44:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:44:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:44:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:44:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:44:40 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:44:40 http.go:96: HTTPS: listening on [::]:9091\n2020/02/26 10:48:25 oauthproxy.go:774: basicauth: 10.129.2.6:43834 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 10:49:27 oauthproxy.go:774: basicauth: 10.128.0.10:42812 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 10:52:55 oauthproxy.go:774: basicauth: 10.129.2.6:45816 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 10:56:02 oauthproxy.go:774: basicauth: 10.130.0.47:36044 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 10:56:26 oauthproxy.go:774: basicauth: 10.128.2.24:56356 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 26 10:57:27.535 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-26T10:44:39.918593953Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-26T10:44:39.918737218Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-26T10:44:39.921523366Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-26T10:44:45.048767507Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 26 10:57:29.823 E ns/openshift-service-ca pod/configmap-cabundle-injector-6489bbf95d-9qjc2 node/ip-10-0-154-226.ec2.internal container=configmap-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:57:29.847 E ns/openshift-service-ca pod/apiservice-cabundle-injector-fbbbcc676-vxlcp node/ip-10-0-154-226.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 10:57:36.239 E ns/openshift-monitoring pod/node-exporter-fz59b node/ip-10-0-147-70.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:37:36Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:36Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:57:40.571 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-26T10:57:37.890Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-26T10:57:37.893Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-26T10:57:37.895Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-26T10:57:37.896Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-26T10:57:37.896Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-26T10:57:37.896Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-26T10:57:37.897Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-26T10:57:37.897Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=info ts=2020-02-26T10:57:37.897Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=error ts=2020-02-26
Feb 26 10:57:41.923 E ns/openshift-monitoring pod/node-exporter-jpxvn node/ip-10-0-138-1.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:37:32Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:37:32Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 10:58:36.050 E ns/openshift-console pod/console-6b46cf44fc-j97td node/ip-10-0-154-226.ec2.internal container=console container exited with code 2 (Error): 2020/02/26 10:42:13 cmd/main: cookies are secure!\n2020/02/26 10:42:13 cmd/main: Binding to [::]:8443...\n2020/02/26 10:42:13 cmd/main: using TLS\n2020/02/26 10:49:31 http: TLS handshake error from 10.128.2.7:37856: read tcp 10.129.0.41:8443->10.128.2.7:37856: read: connection reset by peer\n
Feb 26 10:58:57.124 E ns/openshift-controller-manager pod/controller-manager-z2lns node/ip-10-0-154-226.ec2.internal container=controller-manager container exited with code 137 (Error): 
Feb 26 10:59:50.334 E ns/openshift-sdn pod/sdn-controller-9spfw node/ip-10-0-138-1.ec2.internal container=sdn-controller container exited with code 2 (Error): I0226 10:28:34.916101       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 26 10:59:50.388 E ns/openshift-sdn pod/sdn-jvs7x node/ip-10-0-154-226.ec2.internal container=sdn container exited with code 255 (Error): :58:56.922771    3180 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-z2lns\nI0226 10:59:13.426372    3180 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-cl89s got IP 10.129.0.65, ofport 66\nI0226 10:59:21.254394    3180 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.69:8443 10.129.0.65:8443 10.130.0.54:8443]\nI0226 10:59:21.254434    3180 roundrobin.go:218] Delete endpoint 10.129.0.65:8443 for service "openshift-controller-manager/controller-manager:https"\nI0226 10:59:21.254499    3180 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 10:59:21.454386    3180 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 10:59:21.533026    3180 proxier.go:371] userspace proxy: processing 0 service events\nI0226 10:59:21.533051    3180 proxier.go:350] userspace syncProxyRules took 78.64067ms\nI0226 10:59:21.533062    3180 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 10:59:43.232041    3180 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.5:6443 10.130.0.2:6443]\nI0226 10:59:43.232080    3180 roundrobin.go:218] Delete endpoint 10.128.0.13:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 10:59:43.232131    3180 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 10:59:43.442766    3180 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 10:59:43.515971    3180 proxier.go:371] userspace proxy: processing 0 service events\nI0226 10:59:43.515997    3180 proxier.go:350] userspace syncProxyRules took 73.204952ms\nI0226 10:59:43.516008    3180 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 10:59:49.302883    3180 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0226 10:59:49.302950    3180 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 26 10:59:53.401 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:00:01.882 E ns/openshift-sdn pod/sdn-controller-2v5sp node/ip-10-0-138-228.ec2.internal container=sdn-controller container exited with code 2 (Error): I0226 10:28:34.081807       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 26 11:00:12.797 E ns/openshift-sdn pod/sdn-t6zp7 node/ip-10-0-129-96.ec2.internal container=sdn container exited with code 255 (Error): userspace proxy: processing 0 service events\nI0226 10:58:53.749349    2771 proxier.go:350] userspace syncProxyRules took 71.591701ms\nI0226 10:58:53.749360    2771 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 10:59:21.250498    2771 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.69:8443 10.129.0.65:8443 10.130.0.54:8443]\nI0226 10:59:21.250542    2771 roundrobin.go:218] Delete endpoint 10.129.0.65:8443 for service "openshift-controller-manager/controller-manager:https"\nI0226 10:59:21.250616    2771 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 10:59:21.425270    2771 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 10:59:21.503240    2771 proxier.go:371] userspace proxy: processing 0 service events\nI0226 10:59:21.503263    2771 proxier.go:350] userspace syncProxyRules took 77.968175ms\nI0226 10:59:21.503273    2771 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 10:59:43.229715    2771 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.5:6443 10.130.0.2:6443]\nI0226 10:59:43.229746    2771 roundrobin.go:218] Delete endpoint 10.128.0.13:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 10:59:43.229792    2771 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 10:59:43.406679    2771 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 10:59:43.479690    2771 proxier.go:371] userspace proxy: processing 0 service events\nI0226 10:59:43.479719    2771 proxier.go:350] userspace syncProxyRules took 73.016516ms\nI0226 10:59:43.479730    2771 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:00:12.376006    2771 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0226 11:00:12.376068    2771 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 26 11:00:13.874 E ns/openshift-multus pod/multus-2jpbg node/ip-10-0-142-143.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 26 11:00:24.815 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6f74f465bc-sfq4p node/ip-10-0-154-226.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 26 11:00:37.940 E ns/openshift-sdn pod/sdn-sjmfg node/ip-10-0-142-143.ec2.internal container=sdn container exited with code 255 (Error): 1   14113 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0226 11:00:02.024015   14113 cmd.go:173] openshift-sdn network plugin registering startup\nI0226 11:00:02.024120   14113 cmd.go:177] openshift-sdn network plugin ready\nI0226 11:00:28.475596   14113 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.72:6443 10.129.0.5:6443 10.130.0.2:6443]\nI0226 11:00:28.475655   14113 roundrobin.go:218] Delete endpoint 10.128.0.72:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:00:28.475731   14113 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:00:28.488696   14113 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.72:6443 10.129.0.5:6443]\nI0226 11:00:28.488747   14113 roundrobin.go:218] Delete endpoint 10.130.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:00:28.644369   14113 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:00:28.713912   14113 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:00:28.713937   14113 proxier.go:350] userspace syncProxyRules took 69.541241ms\nI0226 11:00:28.713948   14113 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:00:28.713959   14113 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:00:28.878362   14113 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:00:28.947510   14113 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:00:28.947535   14113 proxier.go:350] userspace syncProxyRules took 69.147607ms\nI0226 11:00:28.947546   14113 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:00:37.255659   14113 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0226 11:00:37.255704   14113 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 26 11:00:49.574 E ns/openshift-multus pod/multus-7gpw9 node/ip-10-0-129-96.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 26 11:01:14.676 E ns/openshift-sdn pod/sdn-wfvhg node/ip-10-0-147-70.ec2.internal container=sdn container exited with code 255 (Error): 28.474805   10351 roundrobin.go:218] Delete endpoint 10.128.0.72:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:00:28.474864   10351 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:00:28.487996   10351 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.72:6443 10.129.0.5:6443]\nI0226 11:00:28.488034   10351 roundrobin.go:218] Delete endpoint 10.130.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:00:28.648618   10351 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:00:28.717491   10351 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:00:28.717513   10351 proxier.go:350] userspace syncProxyRules took 68.872153ms\nI0226 11:00:28.717524   10351 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:00:28.717535   10351 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:00:28.885776   10351 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:00:28.955060   10351 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:00:28.955083   10351 proxier.go:350] userspace syncProxyRules took 69.283386ms\nI0226 11:00:28.955095   10351 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:00:58.955398   10351 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:00:59.129629   10351 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:00:59.198443   10351 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:00:59.198466   10351 proxier.go:350] userspace syncProxyRules took 68.813533ms\nI0226 11:00:59.198477   10351 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:01:14.544836   10351 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0226 11:01:14.544899   10351 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 26 11:01:32.032 E ns/openshift-multus pod/multus-pdvnp node/ip-10-0-154-226.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 26 11:01:34.287 E ns/openshift-sdn pod/sdn-dwlz9 node/ip-10-0-138-228.ec2.internal container=sdn container exited with code 255 (Error):  took 77.175891ms\nI0226 11:01:10.452877   10191 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:01:12.486168   10191 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-dznwh got IP 10.130.0.55, ofport 56\nI0226 11:01:16.216860   10191 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.72:6443 10.129.0.5:6443 10.130.0.55:6443]\nI0226 11:01:16.216908   10191 roundrobin.go:218] Delete endpoint 10.130.0.55:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:01:16.216985   10191 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:01:16.235553   10191 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.72:6443 10.130.0.55:6443]\nI0226 11:01:16.235688   10191 roundrobin.go:218] Delete endpoint 10.129.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0226 11:01:16.422454   10191 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:01:16.494183   10191 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:01:16.494214   10191 proxier.go:350] userspace syncProxyRules took 71.732008ms\nI0226 11:01:16.494230   10191 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:01:16.494246   10191 proxy.go:334] hybrid proxy: syncProxyRules start\nI0226 11:01:16.676521   10191 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0226 11:01:16.763714   10191 proxier.go:371] userspace proxy: processing 0 service events\nI0226 11:01:16.763743   10191 proxier.go:350] userspace syncProxyRules took 87.162318ms\nI0226 11:01:16.763759   10191 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0226 11:01:33.599916   10191 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0226 11:01:33.599961   10191 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 26 11:01:47.091 E ns/openshift-multus pod/multus-admission-controller-rmtpp node/ip-10-0-154-226.ec2.internal container=multus-admission-controller container exited with code 137 (OOMKilled): 
Feb 26 11:02:32.920 E ns/openshift-multus pod/multus-xqhqt node/ip-10-0-138-1.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 26 11:03:14.625 E ns/openshift-multus pod/multus-455ng node/ip-10-0-138-228.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 26 11:04:25.287 E ns/openshift-machine-config-operator pod/machine-config-operator-59dd46ffcc-89vhh node/ip-10-0-138-1.ec2.internal container=machine-config-operator container exited with code 2 (Error): s/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 15286 (21540)\nW0226 10:56:21.593451       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRoleBinding ended with: too old resource version: 15708 (20275)\nW0226 10:56:21.660134       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 15266 (21026)\nW0226 10:56:21.671137       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 21535 (21912)\nW0226 10:56:22.164350       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: too old resource version: 15706 (20275)\nW0226 10:56:22.353456       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21407 (21944)\nW0226 10:56:22.441843       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 15292 (21028)\nW0226 10:56:22.464246       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 18044 (20271)\nW0226 10:56:22.464373       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 19045 (20270)\nW0226 10:56:22.464786       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 15211 (21556)\nW0226 10:56:22.468560       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 15280 (21555)\n
Feb 26 11:06:20.268 E ns/openshift-machine-config-operator pod/machine-config-daemon-v52l8 node/ip-10-0-138-228.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:06:39.357 E ns/openshift-machine-config-operator pod/machine-config-daemon-rgf59 node/ip-10-0-147-70.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:06:50.299 E ns/openshift-machine-config-operator pod/machine-config-daemon-hn9cl node/ip-10-0-129-96.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:07:02.720 E ns/openshift-machine-config-operator pod/machine-config-daemon-p9b2m node/ip-10-0-154-226.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:07:16.430 E ns/openshift-machine-config-operator pod/machine-config-controller-6fffb65d8b-rwdr7 node/ip-10-0-138-228.ec2.internal container=machine-config-controller container exited with code 2 (Error): :435] Pool master: node ip-10-0-138-1.ec2.internal is now reporting ready\nI0226 11:01:09.059805       1 node_controller.go:433] Pool master: node ip-10-0-154-226.ec2.internal is now reporting unready: node ip-10-0-154-226.ec2.internal is reporting NotReady=False\nI0226 11:01:39.204431       1 node_controller.go:433] Pool master: node ip-10-0-138-228.ec2.internal is now reporting unready: node ip-10-0-138-228.ec2.internal is reporting NotReady=False\nI0226 11:01:49.220660       1 node_controller.go:435] Pool master: node ip-10-0-138-228.ec2.internal is now reporting ready\nW0226 11:01:51.425056       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 27553 (27983)\nW0226 11:01:54.238900       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 27983 (28021)\nI0226 11:02:05.171344       1 node_controller.go:433] Pool master: node ip-10-0-138-1.ec2.internal is now reporting unready: node ip-10-0-138-1.ec2.internal is reporting NotReady=False\nI0226 11:02:09.097503       1 node_controller.go:435] Pool master: node ip-10-0-154-226.ec2.internal is now reporting ready\nI0226 11:02:45.214659       1 node_controller.go:435] Pool master: node ip-10-0-138-1.ec2.internal is now reporting ready\nI0226 11:02:49.264204       1 node_controller.go:433] Pool master: node ip-10-0-138-228.ec2.internal is now reporting unready: node ip-10-0-138-228.ec2.internal is reporting NotReady=False\nI0226 11:03:24.857720       1 node_controller.go:433] Pool worker: node ip-10-0-147-70.ec2.internal is now reporting unready: node ip-10-0-147-70.ec2.internal is reporting NotReady=False\nI0226 11:03:29.299440       1 node_controller.go:435] Pool master: node ip-10-0-138-228.ec2.internal is now reporting ready\nI0226 11:04:04.886475       1 node_controller.go:435] Pool worker: node ip-10-0-147-70.ec2.internal is now reporting ready\n
Feb 26 11:09:41.885 E ns/openshift-machine-config-operator pod/machine-config-server-949zq node/ip-10-0-138-228.ec2.internal container=machine-config-server container exited with code 2 (Error): I0226 10:33:43.966460       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-165-g1452cf64-dirty (1452cf640f3d96989ebbc88339c847b553c8fe3c)\nI0226 10:33:43.968260       1 api.go:51] Launching server on :22624\nI0226 10:33:43.968718       1 api.go:51] Launching server on :22623\n
Feb 26 11:09:52.140 E ns/openshift-marketplace pod/community-operators-d945859db-7fgnd node/ip-10-0-142-143.ec2.internal container=community-operators container exited with code 2 (Error): 
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.289 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-143.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.300 E ns/openshift-console-operator pod/console-operator-55586fd49f-swf76 node/ip-10-0-154-226.ec2.internal container=console-operator container exited with code 255 (Error):     1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-02-26-101225\nE0226 10:58:35.138146       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-02-26-101225\nE0226 10:58:35.262017       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-02-26-101225\nE0226 10:58:35.262157       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-02-26-101225\nI0226 10:58:53.431144       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-26T10:37:49Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-26T10:58:53Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-26T10:58:53Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-26T10:37:49Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0226 10:58:53.444251       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"6a569e65-1e32-4a3d-8344-0929101a5496", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0226 11:09:15.324024       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31074 (31075)\nW0226 11:09:21.584828       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31080 (31107)\nI0226 11:09:52.297016       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:09:52.297081       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:09:53.310 E ns/openshift-ingress pod/router-default-74c9845497-4b6gl node/ip-10-0-142-143.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:19.643108       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:29.368731       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:34.327995       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:39.320061       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:49.415706       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:06:54.405198       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:07:01.850911       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:07:06.843961       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:07:13.286717       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:09:51.140067       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 26 11:09:53.358 E ns/openshift-service-ca pod/configmap-cabundle-injector-6cb798f5-gq2ww node/ip-10-0-154-226.ec2.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Feb 26 11:09:53.370 E ns/openshift-marketplace pod/certified-operators-597944b69c-mz5bj node/ip-10-0-142-143.ec2.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:53.389 E ns/openshift-monitoring pod/thanos-querier-696f8dfcb9-q6g98 node/ip-10-0-142-143.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/26 10:56:40 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:56:40 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:56:40 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:56:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:56:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:56:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:56:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:56:40 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:56:40 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 11:09:53.445 E ns/openshift-authentication pod/oauth-openshift-69c8c74458-zjnhd node/ip-10-0-154-226.ec2.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:09:55.309 E ns/openshift-machine-api pod/machine-api-operator-6769f9fbfc-k8j9p node/ip-10-0-154-226.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 26 11:09:58.402 E ns/openshift-machine-config-operator pod/machine-config-operator-7dd585445f-mcqf7 node/ip-10-0-154-226.ec2.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0226 11:06:18.864608       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"64a1d490-fff0-44e4-b4ff-e3e464affb36", ResourceVersion:"30058", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718309922, loc:(*time.Location)(0x271c9e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-7dd585445f-mcqf7_cdb5a3f6-0282-4dc6-8397-1c33c05f363b\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-02-26T11:06:18Z\",\"renewTime\":\"2020-02-26T11:06:18Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7dd585445f-mcqf7_cdb5a3f6-0282-4dc6-8397-1c33c05f363b became leader'\nI0226 11:06:18.864691       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config\nI0226 11:06:19.390716       1 operator.go:246] Starting MachineConfigOperator\nI0226 11:06:19.396216       1 event.go:255] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"c5965bda-738c-4fda-b339-86f7897d9025", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-02-26-100751}] to [{operator 0.0.1-2020-02-26-101225}]\n
Feb 26 11:09:59.520 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6f74f465bc-sfq4p node/ip-10-0-154-226.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:10:00.415 E ns/openshift-machine-config-operator pod/machine-config-server-q9bfd node/ip-10-0-138-1.ec2.internal container=machine-config-server container exited with code 2 (Error): I0226 10:33:37.478873       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-165-g1452cf64-dirty (1452cf640f3d96989ebbc88339c847b553c8fe3c)\nI0226 10:33:37.480112       1 api.go:51] Launching server on :22624\nI0226 10:33:37.480194       1 api.go:51] Launching server on :22623\nI0226 10:34:21.900663       1 api.go:97] Pool worker requested by 10.0.148.39:12561\n
Feb 26 11:10:00.902 E ns/openshift-operator-lifecycle-manager pod/packageserver-647f975f79-p9xxv node/ip-10-0-138-228.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:10:21.101 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 26 11:10:22.811 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-96.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-26T11:10:16.406Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-26T11:10:16.411Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-26T11:10:16.412Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-26T11:10:16.417Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-26T11:10:16.417Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-26
Feb 26 11:11:08.401 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:11:41.141 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Feb 26 11:11:44.053 E ns/openshift-cluster-node-tuning-operator pod/tuned-xrfs2 node/ip-10-0-138-1.ec2.internal container=tuned container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:11:53.402 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:12:28.398 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=scheduler container exited with code 2 (Error): 21 +0000 UTC))\nI0226 10:56:34.181232       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582713138" [] issuer="kubelet-signer" (2020-02-26 10:32:18 +0000 UTC to 2020-02-27 10:13:43 +0000 UTC (now=2020-02-26 10:56:34.181215397 +0000 UTC))\nI0226 10:56:34.181271       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-26 10:13:40 +0000 UTC to 2020-02-27 10:13:40 +0000 UTC (now=2020-02-26 10:56:34.181253559 +0000 UTC))\nI0226 10:56:34.181504       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1582713141" (2020-02-26 10:32:39 +0000 UTC to 2022-02-25 10:32:40 +0000 UTC (now=2020-02-26 10:56:34.181490764 +0000 UTC))\nI0226 10:56:34.181710       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582714594" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582714593" (2020-02-26 09:56:33 +0000 UTC to 2021-02-25 09:56:33 +0000 UTC (now=2020-02-26 10:56:34.181699097 +0000 UTC))\nI0226 10:56:34.181807       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1582714594" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582714593" (2020-02-26 09:56:33 +0000 UTC to 2021-02-25 09:56:33 +0000 UTC (now=2020-02-26 10:56:34.18179451 +0000 UTC))\n
Feb 26 11:12:28.493 E ns/openshift-monitoring pod/node-exporter-pvmkh node/ip-10-0-154-226.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:56:35Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:35Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:12:28.533 E ns/openshift-cluster-node-tuning-operator pod/tuned-bgjvg node/ip-10-0-154-226.ec2.internal container=tuned container exited with code 143 (Error): /revision-pruner-5-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:55.636520   71154 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-2-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:55.998266   71154 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-6-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:56.198835   71154 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-3-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:56.402924   71154 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:56.600086   71154 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-6-ip-10-0-154-226.ec2.internal) labels changed node wide: false\nI0226 11:09:56.800387   71154 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-154-226.ec2.internal) labels changed node wide: true\nI0226 11:09:58.963260   71154 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:09:58.971039   71154 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:09:59.167643   71154 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:10:06.994049   71154 openshift-tuned.go:550] Pod (openshift-console/console-b4847ddb-rbl69) labels changed node wide: true\nI0226 11:10:08.962340   71154 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:10:08.964165   71154 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:10:09.169977   71154 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:10:09.171882   71154 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-154-226.ec2.internal) labels changed node wide: true\n
Feb 26 11:12:28.560 E ns/openshift-sdn pod/sdn-controller-47kvc node/ip-10-0-154-226.ec2.internal container=sdn-controller container exited with code 2 (Error): I0226 10:59:48.997261       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0226 10:59:49.044186       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"a708bed4-0379-443b-a5c1-5b326f194dff", ResourceVersion:"26348", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718309713, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-154-226\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-26T10:28:33Z\",\"renewTime\":\"2020-02-26T10:59:48Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-154-226 became leader'\nI0226 10:59:49.044301       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0226 10:59:49.062154       1 master.go:51] Initializing SDN master\nI0226 10:59:49.203485       1 network_controller.go:60] Started OpenShift Network Controller\n
Feb 26 11:12:28.592 E ns/openshift-controller-manager pod/controller-manager-cl89s node/ip-10-0-154-226.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 26 11:12:28.615 E ns/openshift-sdn pod/ovs-nm6ql node/ip-10-0-154-226.ec2.internal container=openvswitch container exited with code 143 (Error): -26T11:09:53.025Z|00183|bridge|INFO|bridge br0: deleted interface vethe190c309 on port 6\n2020-02-26T11:09:54.064Z|00184|connmgr|INFO|br0<->unix#622: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:54.094Z|00185|connmgr|INFO|br0<->unix#625: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:54.117Z|00186|bridge|INFO|bridge br0: deleted interface veth9f31d465 on port 18\n2020-02-26T11:09:54.905Z|00187|connmgr|INFO|br0<->unix#630: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:54.938Z|00188|connmgr|INFO|br0<->unix#633: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:54.964Z|00189|bridge|INFO|bridge br0: deleted interface veth534765a6 on port 5\n2020-02-26T11:09:55.323Z|00190|connmgr|INFO|br0<->unix#636: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:55.360Z|00191|connmgr|INFO|br0<->unix#639: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:55.386Z|00192|bridge|INFO|bridge br0: deleted interface veth3bc56d85 on port 16\n2020-02-26T11:09:57.522Z|00193|connmgr|INFO|br0<->unix#642: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:57.570Z|00194|connmgr|INFO|br0<->unix#645: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:57.610Z|00195|bridge|INFO|bridge br0: deleted interface vethf83ad307 on port 20\n2020-02-26T11:09:58.411Z|00196|connmgr|INFO|br0<->unix#648: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:58.463Z|00197|connmgr|INFO|br0<->unix#651: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:58.500Z|00198|bridge|INFO|bridge br0: deleted interface veth8f55e45a on port 11\n2020-02-26T11:09:57.577Z|00022|jsonrpc|WARN|unix#561: send error: Broken pipe\n2020-02-26T11:09:57.577Z|00023|reconnect|WARN|unix#561: connection dropped (Broken pipe)\n2020-02-26T11:09:58.702Z|00199|connmgr|INFO|br0<->unix#654: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:09:58.754Z|00200|connmgr|INFO|br0<->unix#657: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:09:58.793Z|00201|bridge|INFO|bridge br0: deleted interface veth73ab4ad0 on port 15\nTerminated\n
Feb 26 11:12:28.650 E ns/openshift-multus pod/multus-sxjbh node/ip-10-0-154-226.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:12:28.663 E ns/openshift-multus pod/multus-admission-controller-kmzmj node/ip-10-0-154-226.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 26 11:12:28.691 E ns/openshift-machine-config-operator pod/machine-config-server-l644t node/ip-10-0-154-226.ec2.internal container=machine-config-server container exited with code 2 (Error): I0226 11:09:58.521883       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-165-g1452cf64-dirty (1452cf640f3d96989ebbc88339c847b553c8fe3c)\nI0226 11:09:58.524342       1 api.go:51] Launching server on :22624\nI0226 11:09:58.524406       1 api.go:51] Launching server on :22623\n
Feb 26 11:12:28.714 E ns/openshift-machine-config-operator pod/machine-config-daemon-rkj4p node/ip-10-0-154-226.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:12:28.726 E ns/openshift-cluster-version pod/cluster-version-operator-7ccc5fb87f-hx4qx node/ip-10-0-154-226.ec2.internal container=cluster-version-operator container exited with code 255 (Error): g "system:openshift:operator:kube-storage-version-migrator-operator" (127 of 508)\nI0226 11:10:09.303046       1 sync_worker.go:621] Running sync for kubestorageversionmigrator "cluster" (128 of 508)\nI0226 11:10:09.369115       1 start.go:140] Shutting down due to terminated\nI0226 11:10:09.369375       1 task_graph.go:583] Canceled worker 6\nI0226 11:10:09.369466       1 start.go:188] Stepping down as leader\nI0226 11:10:09.369782       1 cvo.go:392] Started syncing cluster version "openshift-cluster-version/version" (2020-02-26 11:10:09.369772382 +0000 UTC m=+10.416348040)\nI0226 11:10:09.370159       1 cvo.go:421] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-0dc7lsdl/release@sha256:42b6d507a321bab2c25410ad2eff21347fb60677ad3e2372a70ff1de54ffb327", Force:true}\nI0226 11:10:09.369809       1 task_graph.go:583] Canceled worker 12\nI0226 11:10:09.369822       1 task_graph.go:583] Canceled worker 3\nI0226 11:10:09.369832       1 task_graph.go:583] Canceled worker 5\nI0226 11:10:09.369841       1 task_graph.go:583] Canceled worker 2\nI0226 11:10:09.369851       1 task_graph.go:583] Canceled worker 4\nI0226 11:10:09.369861       1 task_graph.go:583] Canceled worker 8\nI0226 11:10:09.369869       1 task_graph.go:583] Canceled worker 10\nI0226 11:10:09.369879       1 task_graph.go:583] Canceled worker 13\nI0226 11:10:09.369888       1 task_graph.go:583] Canceled worker 0\nI0226 11:10:09.369897       1 task_graph.go:583] Canceled worker 11\nI0226 11:10:09.369928       1 task_graph.go:583] Canceled worker 1\nI0226 11:10:09.369939       1 task_graph.go:583] Canceled worker 9\nI0226 11:10:09.369948       1 task_graph.go:583] Canceled worker 14\nI0226 11:10:09.369958       1 task_graph.go:583] Canceled worker 15\nI0226 11:10:09.380382       1 cvo.go:394] Finished syncing cluster version "openshift-cluster-version/version" (10.59932ms)\nI0226 11:10:09.380495       1 cvo.go:319] Shutting down ClusterVersionOperator\nF0226 11:10:09.387937       1 start.go:148] Received shutdown signal twice, exiting\n
Feb 26 11:12:31.882 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): server: mvcc: required revision has been compacted\nE0226 11:10:08.972602       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:08.972682       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:08.980072       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.033802       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.034045       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.034071       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.034211       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.034236       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:10:09.034377       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0226 11:10:09.195095       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-154-226.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0226 11:10:09.195256       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nW0226 11:10:09.228978       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.138.1 10.0.138.228]\nI0226 11:10:09.235288       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-154-226.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Feb 26 11:12:31.882 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0226 10:56:24.780139       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 26 11:12:31.882 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:06:30.245588       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:06:30.246276       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:06:30.462261       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:06:30.462788       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 26 11:12:31.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): I0226 10:56:28.045978       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0226 10:56:28.055480       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0226 10:56:28.056744       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 26 11:12:31.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:08:49.921305       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:08:49.921722       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:08:59.930969       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:08:59.931325       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:09.941997       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:09.942331       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:19.951578       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:19.952292       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:29.961476       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:29.961835       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:39.971489       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:39.972038       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:49.980855       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:49.981873       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:09:59.997399       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:09:59.997835       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 26 11:12:31.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-226.ec2.internal node/ip-10-0-154-226.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): bject has been modified; please apply your changes to the latest version and try again\nI0226 11:10:09.027161       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"12f66fab-cba3-4ffc-afc9-a3e6c19e533f", APIVersion:"v1", ResourceVersion:"1812", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0226 11:10:09.045837       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0226 11:10:09.046573       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"12f66fab-cba3-4ffc-afc9-a3e6c19e533f", APIVersion:"v1", ResourceVersion:"1812", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0226 11:10:09.063873       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0226 11:10:09.063958       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"12f66fab-cba3-4ffc-afc9-a3e6c19e533f", APIVersion:"v1", ResourceVersion:"32061", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\n
Feb 26 11:12:32.989 E ns/openshift-multus pod/multus-sxjbh node/ip-10-0-154-226.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 26 11:12:35.479 E ns/openshift-multus pod/multus-sxjbh node/ip-10-0-154-226.ec2.internal invariant violation: pod may not transition Running->Pending
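The repeated "invariant violation: pod may not transition Running->Pending" entries come from the e2e monitor's pod-phase checks. As a rough illustration only (a minimal sketch, not the openshift/origin monitor's actual implementation; PhaseEvent and checkNoRunningToPending are invented names for this example), the invariant amounts to rejecting any observed phase sequence in which a pod that was Running is later reported Pending:

    // Hypothetical sketch: flag pods whose observed phase moves backwards
    // from Running to Pending. Not the real monitor code.
    package main

    import "fmt"

    // PhaseEvent is an assumed record of a pod's observed phase at one point in time.
    type PhaseEvent struct {
        Pod   string // namespace/name of the pod
        Phase string // "Pending", "Running", "Succeeded", "Failed", ...
    }

    // checkNoRunningToPending returns one message per pod that was seen
    // Running and later reported Pending, which the invariant forbids.
    func checkNoRunningToPending(events []PhaseEvent) []string {
        last := map[string]string{}
        var violations []string
        for _, e := range events {
            if last[e.Pod] == "Running" && e.Phase == "Pending" {
                violations = append(violations,
                    fmt.Sprintf("pod %s may not transition Running->Pending", e.Pod))
            }
            last[e.Pod] = e.Phase
        }
        return violations
    }

    func main() {
        events := []PhaseEvent{
            {Pod: "openshift-multus/multus-sxjbh", Phase: "Running"},
            {Pod: "openshift-multus/multus-sxjbh", Phase: "Pending"}, // illegal backwards transition
        }
        for _, v := range checkNoRunningToPending(events) {
            fmt.Println(v)
        }
    }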
Feb 26 11:12:37.123 E ns/openshift-cluster-node-tuning-operator pod/tuned-k9zq8 node/ip-10-0-142-143.ec2.internal container=tuned container exited with code 143 (Error): ps-deployment-upgrade-554/dp-657fc4b57d-pd2jw) labels changed node wide: true\nI0226 11:09:53.242549     847 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:09:53.244065     847 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:09:53.375666     847 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:09:56.371356     847 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-d945859db-7fgnd) labels changed node wide: true\nI0226 11:09:58.242575     847 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:09:58.244144     847 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:09:58.371687     847 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:10:06.364640     847 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-597944b69c-mz5bj) labels changed node wide: true\nI0226 11:10:08.242613     847 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:10:08.244251     847 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:10:08.359470     847 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:10:23.303828     847 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-4170/foo-rm8vt) labels changed node wide: true\nI0226 11:10:28.242619     847 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:10:28.244344     847 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:10:28.359033     847 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:10:46.344939     847 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-696f8dfcb9-q6g98) labels changed node wide: true\n
Feb 26 11:12:37.138 E ns/openshift-monitoring pod/node-exporter-jcxxs node/ip-10-0-142-143.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:56:59Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:59Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:12:37.162 E ns/openshift-multus pod/multus-jfcmg node/ip-10-0-142-143.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:12:37.187 E ns/openshift-machine-config-operator pod/machine-config-daemon-wzln8 node/ip-10-0-142-143.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:12:39.823 E ns/openshift-multus pod/multus-jfcmg node/ip-10-0-142-143.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 26 11:12:44.607 E ns/openshift-machine-config-operator pod/machine-config-daemon-rkj4p node/ip-10-0-154-226.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 26 11:12:45.753 E ns/openshift-machine-config-operator pod/machine-config-daemon-wzln8 node/ip-10-0-142-143.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 26 11:12:48.647 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 26 11:12:54.234 E ns/openshift-ingress pod/router-default-74c9845497-5rl8r node/ip-10-0-147-70.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:11:24.549062       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:11:29.538099       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:11:39.035357       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:11:44.034048       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:04.594130       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:09.587511       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:33.292459       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:38.290388       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:43.290388       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0226 11:12:48.292723       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 26 11:12:54.319 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/26 10:57:01 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 26 11:12:54.319 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/26 10:57:02 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:57:02 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:57:02 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:57:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:57:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:57:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 10:57:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:57:02 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:57:02 http.go:96: HTTPS: listening on [::]:9091\n2020/02/26 10:58:26 oauthproxy.go:774: basicauth: 10.128.2.24:58002 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 11:02:57 oauthproxy.go:774: basicauth: 10.128.2.24:32920 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 11:07:27 oauthproxy.go:774: basicauth: 10.128.2.24:36028 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/26 11:12:07 oauthproxy.go:774: basicauth: 10.128.2.24:39942 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 26 11:12:54.319 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-147-70.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-26T10:57:00.566299228Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-26T10:57:00.566421997Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-26T10:57:00.568219588Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-26T10:57:05.728393315Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 26 11:12:54.371 E ns/openshift-monitoring pod/thanos-querier-696f8dfcb9-p9b52 node/ip-10-0-147-70.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/26 10:56:17 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:56:17 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 10:56:17 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 10:56:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 10:56:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 10:56:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 10:56:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 10:56:17 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 10:56:17 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 11:12:54.403 E ns/openshift-monitoring pod/prometheus-adapter-778b9dd58d-b8kfn node/ip-10-0-147-70.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0226 10:57:00.516450       1 adapter.go:93] successfully using in-cluster auth\nI0226 10:57:00.959581       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 26 11:12:55.519 E ns/openshift-monitoring pod/grafana-64bd8c7c-4zmkd node/ip-10-0-147-70.ec2.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:12:55.519 E ns/openshift-monitoring pod/grafana-64bd8c7c-4zmkd node/ip-10-0-147-70.ec2.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:12:56.104 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-147-70.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:12:56.104 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-147-70.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:12:56.104 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-147-70.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:12:58.111 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-76c5wwfwt node/ip-10-0-138-1.ec2.internal container=operator container exited with code 255 (Error):    1 reflector.go:158] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0226 11:10:34.100807       1 httplog.go:90] GET /metrics: (5.474406ms) 200 [Prometheus/2.14.0 10.128.2.35:37840]\nI0226 11:10:39.137493       1 httplog.go:90] GET /metrics: (1.572971ms) 200 [Prometheus/2.14.0 10.129.2.20:44992]\nI0226 11:11:04.095600       1 httplog.go:90] GET /metrics: (6.170469ms) 200 [Prometheus/2.14.0 10.128.2.35:37840]\nI0226 11:11:09.137576       1 httplog.go:90] GET /metrics: (1.558821ms) 200 [Prometheus/2.14.0 10.129.2.20:44992]\nI0226 11:11:12.922665       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 1 items received\nI0226 11:11:34.096264       1 httplog.go:90] GET /metrics: (6.837228ms) 200 [Prometheus/2.14.0 10.128.2.35:37840]\nI0226 11:11:39.137206       1 httplog.go:90] GET /metrics: (1.368675ms) 200 [Prometheus/2.14.0 10.129.2.20:44992]\nI0226 11:12:04.095693       1 httplog.go:90] GET /metrics: (6.212799ms) 200 [Prometheus/2.14.0 10.128.2.35:37840]\nI0226 11:12:09.137889       1 httplog.go:90] GET /metrics: (1.824727ms) 200 [Prometheus/2.14.0 10.129.2.20:44992]\nI0226 11:12:17.927397       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 1 items received\nI0226 11:12:25.925980       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 1 items received\nI0226 11:12:29.923481       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 1 items received\nI0226 11:12:34.096174       1 httplog.go:90] GET /metrics: (6.660123ms) 200 [Prometheus/2.14.0 10.128.2.35:37840]\nI0226 11:12:39.137636       1 httplog.go:90] GET /metrics: (1.564352ms) 200 [Prometheus/2.14.0 10.129.2.20:44992]\nI0226 11:12:55.703376       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:12:55.703424       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:12:59.870 E ns/openshift-monitoring pod/thanos-querier-696f8dfcb9-7d27p node/ip-10-0-138-1.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/26 11:10:06 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 11:10:06 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 11:10:06 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 11:10:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 11:10:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 11:10:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/26 11:10:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 11:10:06 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 11:10:06 http.go:96: HTTPS: listening on [::]:9091\n
Feb 26 11:13:00.348 E ns/openshift-dns-operator pod/dns-operator-7dcbc8d4fc-hj7sm node/ip-10-0-138-1.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:13:00.444 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-67ccf947cf-mk76s node/ip-10-0-138-1.ec2.internal container=operator container exited with code 255 (Error): 1:11:56.831739       1 httplog.go:90] GET /metrics: (5.81789ms) 200 [Prometheus/2.14.0 10.128.2.35:57498]\nI0226 11:12:00.723901       1 httplog.go:90] GET /metrics: (1.322458ms) 200 [Prometheus/2.14.0 10.129.2.20:44180]\nI0226 11:12:08.317928       1 request.go:538] Throttling request took 149.437285ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0226 11:12:08.517917       1 request.go:538] Throttling request took 196.750078ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0226 11:12:26.832226       1 httplog.go:90] GET /metrics: (6.291129ms) 200 [Prometheus/2.14.0 10.128.2.35:57498]\nI0226 11:12:28.310563       1 request.go:538] Throttling request took 163.308945ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0226 11:12:28.510601       1 request.go:538] Throttling request took 197.014469ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0226 11:12:30.724055       1 httplog.go:90] GET /metrics: (1.480743ms) 200 [Prometheus/2.14.0 10.129.2.20:44180]\nI0226 11:12:48.317900       1 request.go:538] Throttling request took 153.760102ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0226 11:12:48.523051       1 request.go:538] Throttling request took 202.193718ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0226 11:12:56.841907       1 httplog.go:90] GET /metrics: (15.805273ms) 200 [Prometheus/2.14.0 10.128.2.35:57498]\nI0226 11:12:58.548672       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:12:58.548727       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:13:01.427 E ns/openshift-console pod/console-b4847ddb-dvgdd node/ip-10-0-138-1.ec2.internal container=console container exited with code 2 (Error): 2020/02/26 10:58:32 cmd/main: cookies are secure!\n2020/02/26 10:58:32 cmd/main: Binding to [::]:8443...\n2020/02/26 10:58:32 cmd/main: using TLS\n
Feb 26 11:13:01.453 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6f74f465bc-g6pgw node/ip-10-0-138-1.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 26 11:13:01.482 E ns/openshift-service-ca-operator pod/service-ca-operator-86784577d8-w9fg6 node/ip-10-0-138-1.ec2.internal container=operator container exited with code 255 (Error): 
Feb 26 11:13:01.527 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-58bd585cdf-r8bm6 node/ip-10-0-138-1.ec2.internal container=kube-apiserver-operator container exited with code 255 (Error): ult network)\nStaticPodsDegraded: nodes/ip-10-0-154-226.ec2.internal pods/kube-apiserver-ip-10-0-154-226.ec2.internal container=\"kube-apiserver-7\" is not ready"\nI0226 11:12:48.561606       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bcb9ecff-6750-4737-b6c9-992547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-154-226.ec2.internal\" not ready since 2020-02-26 11:12:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-154-226.ec2.internal pods/kube-apiserver-ip-10-0-154-226.ec2.internal container=\"kube-apiserver-7\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-154-226.ec2.internal pods/kube-apiserver-ip-10-0-154-226.ec2.internal container=\"kube-apiserver-7\" is not ready"\nI0226 11:12:58.431741       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"bcb9ecff-6750-4737-b6c9-992547d65429", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-154-226.ec2.internal pods/kube-apiserver-ip-10-0-154-226.ec2.internal container=\"kube-apiserver-7\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0226 11:12:59.588705       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:12:59.588866       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:13:01.567 E ns/openshift-service-ca pod/configmap-cabundle-injector-6cb798f5-4lm2p node/ip-10-0-138-1.ec2.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Feb 26 11:13:02.630 E ns/openshift-service-ca pod/service-serving-cert-signer-7cdb947cd9-qjc9q node/ip-10-0-138-1.ec2.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 26 11:13:04.900 E ns/openshift-authentication pod/oauth-openshift-69c8c74458-n5sc2 node/ip-10-0-138-1.ec2.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:13:33.199 E ns/openshift-authentication pod/oauth-openshift-69c8c74458-dc7w7 node/ip-10-0-154-226.ec2.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nF0226 11:13:32.242357       1 cmd.go:49] Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 26 11:13:53.401 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:14:53.401 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:15:04.444 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Feb 26 11:15:07.717 E ns/openshift-operator-lifecycle-manager pod/packageserver-7ffff6745c-rpn5d node/ip-10-0-138-228.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:15:34.518 E ns/openshift-monitoring pod/node-exporter-wvkb6 node/ip-10-0-147-70.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:57:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:15:34.564 E ns/openshift-sdn pod/ovs-bxjp9 node/ip-10-0-147-70.ec2.internal container=openvswitch container exited with code 143 (Error):  vethe7fd2d84 on port 12\n2020-02-26T11:12:54.313Z|00150|connmgr|INFO|br0<->unix#647: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:12:54.385Z|00151|connmgr|INFO|br0<->unix#650: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:12:54.421Z|00152|bridge|INFO|bridge br0: deleted interface veth33de10ce on port 10\n2020-02-26T11:12:54.477Z|00153|connmgr|INFO|br0<->unix#653: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:12:54.512Z|00154|connmgr|INFO|br0<->unix#656: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:12:54.552Z|00155|bridge|INFO|bridge br0: deleted interface vethe3635e0a on port 15\n2020-02-26T11:12:54.018Z|00021|jsonrpc|WARN|Dropped 6 log messages in last 699 seconds (most recently, 699 seconds ago) due to excessive rate\n2020-02-26T11:12:54.018Z|00022|jsonrpc|WARN|unix#565: receive error: Connection reset by peer\n2020-02-26T11:12:54.018Z|00023|reconnect|WARN|unix#565: connection dropped (Connection reset by peer)\n2020-02-26T11:12:54.024Z|00024|jsonrpc|WARN|unix#566: receive error: Connection reset by peer\n2020-02-26T11:12:54.024Z|00025|reconnect|WARN|unix#566: connection dropped (Connection reset by peer)\n2020-02-26T11:13:16.800Z|00026|jsonrpc|WARN|unix#604: receive error: Connection reset by peer\n2020-02-26T11:13:16.800Z|00027|reconnect|WARN|unix#604: connection dropped (Connection reset by peer)\n2020-02-26T11:13:23.736Z|00156|connmgr|INFO|br0<->unix#681: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:13:23.763Z|00157|connmgr|INFO|br0<->unix#684: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:13:23.769Z|00028|jsonrpc|WARN|unix#609: receive error: Connection reset by peer\n2020-02-26T11:13:23.769Z|00029|reconnect|WARN|unix#609: connection dropped (Connection reset by peer)\n2020-02-26T11:13:23.774Z|00030|jsonrpc|WARN|unix#610: receive error: Connection reset by peer\n2020-02-26T11:13:23.774Z|00031|reconnect|WARN|unix#610: connection dropped (Connection reset by peer)\n2020-02-26T11:13:23.784Z|00158|bridge|INFO|bridge br0: deleted interface vethb6462795 on port 8\nTerminated\n
Feb 26 11:15:34.582 E ns/openshift-multus pod/multus-wc44k node/ip-10-0-147-70.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:15:34.635 E ns/openshift-machine-config-operator pod/machine-config-daemon-nl7gd node/ip-10-0-147-70.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:15:34.658 E ns/openshift-cluster-node-tuning-operator pod/tuned-n6kcr node/ip-10-0-147-70.ec2.internal container=tuned container exited with code 143 (Error): uned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-26 11:12:13,267 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-26 11:12:13,270 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-26 11:12:13,271 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-26 11:12:13,273 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-26 11:12:13,378 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-26 11:12:13,389 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0226 11:12:55.634911    1679 openshift-tuned.go:550] Pod (openshift-ingress/router-default-74c9845497-5rl8r) labels changed node wide: true\nI0226 11:12:57.940090    1679 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:12:57.944609    1679 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:12:58.057382    1679 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:13:06.899608    1679 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0226 11:13:07.940105    1679 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:13:07.942460    1679 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:13:08.072545    1679 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:13:19.409492    1679 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0226 11:13:19.413335    1679 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 11:13:19.413354    1679 openshift-tuned.go:883] Increasing resyncPeriod to 134\nI0226 11:13:48.079236    1679 openshift-tuned.go:137] Received signal: terminated\n
Feb 26 11:15:37.264 E ns/openshift-multus pod/multus-wc44k node/ip-10-0-147-70.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 26 11:15:42.838 E ns/openshift-machine-config-operator pod/machine-config-daemon-nl7gd node/ip-10-0-147-70.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 26 11:15:43.036 E ns/openshift-controller-manager pod/controller-manager-xx852 node/ip-10-0-138-1.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 26 11:15:43.195 E ns/openshift-monitoring pod/node-exporter-mwqbl node/ip-10-0-138-1.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:57:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:15:43.222 E ns/openshift-sdn pod/sdn-controller-hwcbd node/ip-10-0-138-1.ec2.internal container=sdn-controller container exited with code 2 (Error): I0226 11:00:01.026412       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 26 11:15:43.245 E ns/openshift-multus pod/multus-admission-controller-v85f6 node/ip-10-0-138-1.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 26 11:15:43.315 E ns/openshift-multus pod/multus-dnx2t node/ip-10-0-138-1.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:15:43.389 E ns/openshift-machine-config-operator pod/machine-config-daemon-5sg8v node/ip-10-0-138-1.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:15:43.405 E ns/openshift-cluster-node-tuning-operator pod/tuned-nqtkq node/ip-10-0-138-1.ec2.internal container=tuned container exited with code 143 (Error): ed node wide: false\nI0226 11:12:58.752887  106032 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-2-ip-10-0-138-1.ec2.internal) labels changed node wide: false\nI0226 11:12:59.037900  106032 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-4-ip-10-0-138-1.ec2.internal) labels changed node wide: false\nI0226 11:12:59.261928  106032 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-138-1.ec2.internal) labels changed node wide: false\nI0226 11:12:59.832264  106032 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-138-1.ec2.internal) labels changed node wide: false\nI0226 11:13:00.055541  106032 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ip-10-0-138-1.ec2.internal) labels changed node wide: false\nI0226 11:13:00.093638  106032 openshift-tuned.go:550] Pod (openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-76c5wwfwt) labels changed node wide: true\nI0226 11:13:03.154502  106032 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:13:03.158492  106032 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:13:03.484606  106032 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:13:08.231258  106032 openshift-tuned.go:550] Pod (openshift-service-ca/apiservice-cabundle-injector-6f74f465bc-g6pgw) labels changed node wide: true\nI0226 11:13:13.154473  106032 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:13:13.155997  106032 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:13:13.295026  106032 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:13:18.229470  106032 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-69c8c74458-n5sc2) labels changed node wide: true\n
Feb 26 11:15:43.436 E ns/openshift-machine-config-operator pod/machine-config-server-4b54x node/ip-10-0-138-1.ec2.internal container=machine-config-server container exited with code 2 (Error): I0226 11:10:10.294722       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-165-g1452cf64-dirty (1452cf640f3d96989ebbc88339c847b553c8fe3c)\nI0226 11:10:10.297090       1 api.go:51] Launching server on :22624\nI0226 11:10:10.297284       1 api.go:51] Launching server on :22623\n
Feb 26 11:15:43.492 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): I0226 10:53:28.595118       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0226 10:53:28.599476       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0226 10:53:28.599618       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 26 11:15:43.492 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error): ca-bundle true}]\nI0226 11:12:20.133676       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:12:30.140934       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:12:30.141262       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:12:40.153358       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:12:40.153707       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:12:50.163360       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:12:50.163683       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:13:00.171321       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:13:00.171906       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:13:10.181719       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:13:10.182555       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nE0226 11:13:19.423127       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=30178&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:13:19.423292       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=34661&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Feb 26 11:15:43.492 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): pagation policy Background\nI0226 11:13:12.471055       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-55d755bb6d, need 1, creating 1\nI0226 11:13:12.471398       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"ab57ee20-02a7-46ec-ba3a-b3b4b95c29d0", APIVersion:"apps/v1", ResourceVersion:"34734", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-55d755bb6d to 1\nI0226 11:13:12.494408       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0226 11:13:12.496291       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-55d755bb6d", UID:"0fce96e9-80b3-48ed-9845-8ac152c9a210", APIVersion:"apps/v1", ResourceVersion:"34735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-55d755bb6d-vxzmv\nI0226 11:13:13.471895       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-54f544f44f, need 3, creating 1\nI0226 11:13:13.487510       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-54f544f44f", UID:"82bca0ab-8569-4612-b30e-0279ac1e3b5c", APIVersion:"apps/v1", ResourceVersion:"34621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-54f544f44f-gcsjj\nI0226 11:13:13.516876       1 deployment_controller.go:484] Error syncing deployment openshift-machine-config-operator/etcd-quorum-guard: Operation cannot be fulfilled on deployments.apps "etcd-quorum-guard": the object has been modified; please apply your changes to the latest version and try again\n
Feb 26 11:15:43.505 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=scheduler container exited with code 2 (Error): nknown (get services)\nE0226 10:54:36.260994       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0226 10:54:36.282936       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0226 10:54:36.283072       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0226 10:54:36.292874       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0226 10:54:36.292954       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\nW0226 10:54:36.309284       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 20289 (21046)\nE0226 10:54:36.310025       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nE0226 10:54:36.310121       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nW0226 10:54:36.333479       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 20289 (21046)\nW0226 10:54:36.376306       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 17373 (21046)\nW0226 10:54:36.386805       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 18251 (21046)\nW0226 11:10:10.052453       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 21045 (32083)\n
Feb 26 11:15:43.539 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): ver: mvcc: required revision has been compacted\nE0226 11:13:18.978415       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.978567       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.978767       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.978912       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.979101       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986331       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986441       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986466       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986486       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986585       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986659       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986662       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:13:18.986706       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0226 11:13:19.262623       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-138-1.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0226 11:13:19.262959       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 26 11:15:43.539 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0226 10:53:27.530423       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 26 11:15:43.539 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-1.ec2.internal node/ip-10-0-138-1.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:04:37.295395       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:04:37.295795       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:04:37.502273       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:04:37.502585       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 26 11:15:47.162 E ns/openshift-multus pod/multus-dnx2t node/ip-10-0-138-1.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 26 11:15:51.680 E ns/openshift-monitoring pod/telemeter-client-6c48544f9f-2n4j2 node/ip-10-0-129-96.ec2.internal container=reload container exited with code 2 (Error): 
Feb 26 11:15:51.680 E ns/openshift-monitoring pod/telemeter-client-6c48544f9f-2n4j2 node/ip-10-0-129-96.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Feb 26 11:15:52.824 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-96.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-26T11:10:16.406Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-26T11:10:16.411Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-26T11:10:16.412Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-26T11:10:16.414Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-26T11:10:16.414Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-26T11:10:16.417Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-26T11:10:16.417Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-26
Feb 26 11:15:52.824 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-96.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/26 11:10:18 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 26 11:15:52.824 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-96.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/26 11:10:20 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 11:10:20 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/26 11:10:20 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/26 11:10:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/26 11:10:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/26 11:10:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/26 11:10:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/26 11:10:50 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/26 11:10:50 http.go:96: HTTPS: listening on [::]:9091\n2020/02/26 11:14:30 oauthproxy.go:774: basicauth: 10.130.0.47:60704 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 26 11:15:52.824 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-96.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-26T11:10:17.899468713Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-26T11:10:17.89960795Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-26T11:10:17.906469335Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-26T11:10:22.908704311Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-26T11:10:28.032216107Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 26 11:15:52.866 E ns/openshift-machine-config-operator pod/machine-config-daemon-5sg8v node/ip-10-0-138-1.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 26 11:15:52.915 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 26 11:15:53.906 E ns/openshift-marketplace pod/community-operators-847f9c9ff9-l42vt node/ip-10-0-129-96.ec2.internal container=community-operators container exited with code 2 (Error): 
Feb 26 11:15:56.609 E ns/openshift-marketplace pod/redhat-operators-569b45dfff-m5dds node/ip-10-0-147-70.ec2.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:16:02.609 E ns/openshift-authentication-operator pod/authentication-operator-6db88f8bc7-gsmmk node/ip-10-0-138-228.ec2.internal container=operator container exited with code 255 (Error): False","type":"Degraded"},{"lastTransitionTime":"2020-02-26T11:10:02Z","message":"Progressing: not all deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-02-26T10:45:44Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-26T10:37:23Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0226 11:14:59.452811       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"1e17dc2e-9f01-4dde-a85e-28cff99e8fab", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)\nOAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)" to "OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)"\nE0226 11:15:02.495787       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)\nE0226 11:15:05.568271       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nE0226 11:15:08.638974       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)\nI0226 11:16:00.205790       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:16:00.205939       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:16:02.713 E ns/openshift-cluster-machine-approver pod/machine-approver-796fc9578f-c2c6h node/ip-10-0-138-228.ec2.internal container=machine-approver-controller container exited with code 2 (Error): 0:55:47.565206       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0226 10:55:47.565245       1 main.go:236] Starting Machine Approver\nI0226 10:55:47.665527       1 main.go:146] CSR csr-6t7xn added\nI0226 10:55:47.665683       1 main.go:149] CSR csr-6t7xn is already approved\nI0226 10:55:47.665710       1 main.go:146] CSR csr-74bbv added\nI0226 10:55:47.665717       1 main.go:149] CSR csr-74bbv is already approved\nI0226 10:55:47.665737       1 main.go:146] CSR csr-g57q9 added\nI0226 10:55:47.665744       1 main.go:149] CSR csr-g57q9 is already approved\nI0226 10:55:47.665752       1 main.go:146] CSR csr-lbq54 added\nI0226 10:55:47.665762       1 main.go:149] CSR csr-lbq54 is already approved\nI0226 10:55:47.665773       1 main.go:146] CSR csr-v5bx8 added\nI0226 10:55:47.665783       1 main.go:149] CSR csr-v5bx8 is already approved\nI0226 10:55:47.665797       1 main.go:146] CSR csr-vw9bq added\nI0226 10:55:47.665806       1 main.go:149] CSR csr-vw9bq is already approved\nI0226 10:55:47.665814       1 main.go:146] CSR csr-c8s7x added\nI0226 10:55:47.665820       1 main.go:149] CSR csr-c8s7x is already approved\nI0226 10:55:47.665827       1 main.go:146] CSR csr-jp4st added\nI0226 10:55:47.665833       1 main.go:149] CSR csr-jp4st is already approved\nI0226 10:55:47.665840       1 main.go:146] CSR csr-nsg2p added\nI0226 10:55:47.665846       1 main.go:149] CSR csr-nsg2p is already approved\nI0226 10:55:47.665853       1 main.go:146] CSR csr-pxjgc added\nI0226 10:55:47.665859       1 main.go:149] CSR csr-pxjgc is already approved\nI0226 10:55:47.665866       1 main.go:146] CSR csr-w8wwr added\nI0226 10:55:47.665871       1 main.go:149] CSR csr-w8wwr is already approved\nI0226 10:55:47.665886       1 main.go:146] CSR csr-zz6n9 added\nI0226 10:55:47.665894       1 main.go:149] CSR csr-zz6n9 is already approved\nW0226 11:10:10.011562       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 20274 (32082)\n
Feb 26 11:16:04.986 E ns/openshift-insights pod/insights-operator-7c44b6f476-x6qr9 node/ip-10-0-138-228.ec2.internal container=operator container exited with code 2 (Error): 0.tar.gz\nI0226 11:14:30.910695       1 diskrecorder.go:134] Wrote 40 records to disk in 4ms\nI0226 11:14:30.910722       1 periodic.go:151] Periodic gather config completed in 10.124s\nI0226 11:14:37.566730       1 diskrecorder.go:303] Found files to send: [/var/lib/insights-operator/insights-2020-02-26-111430.tar.gz]\nI0226 11:14:37.566797       1 insightsuploader.go:126] Uploading latest report since 2020-02-26T10:56:17Z\nI0226 11:14:37.576300       1 insightsclient.go:163] Uploading application/vnd.redhat.openshift.periodic to https://cloud.redhat.com/api/ingress/v1/upload\nI0226 11:14:40.797307       1 httplog.go:90] GET /metrics: (6.296074ms) 200 [Prometheus/2.14.0 10.128.2.35:47522]\nI0226 11:14:46.122744       1 httplog.go:90] GET /metrics: (2.621695ms) 200 [Prometheus/2.14.0 10.131.0.9:40748]\nI0226 11:14:57.661013       1 insightsclient.go:211] Successfully reported id=2020-02-26T11:14:37Z x-rh-insights-request-id=04f7bc02ce4a418ca8dacb3325f22eca, wrote=21053\nI0226 11:14:57.661050       1 insightsuploader.go:150] Uploaded report successfully in 20.094257551s\nI0226 11:14:57.667313       1 status.go:298] The operator is healthy\nI0226 11:15:10.797821       1 httplog.go:90] GET /metrics: (6.756842ms) 200 [Prometheus/2.14.0 10.128.2.35:47522]\nI0226 11:15:16.123307       1 httplog.go:90] GET /metrics: (3.261593ms) 200 [Prometheus/2.14.0 10.131.0.9:40748]\nI0226 11:15:40.798448       1 httplog.go:90] GET /metrics: (7.321623ms) 200 [Prometheus/2.14.0 10.128.2.35:47522]\nI0226 11:15:46.122032       1 httplog.go:90] GET /metrics: (2.005443ms) 200 [Prometheus/2.14.0 10.131.0.9:40748]\nI0226 11:16:02.039381       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0226 11:16:02.056651       1 configobserver.go:90] Found cloud.openshift.com token\nI0226 11:16:02.056769       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0226 11:16:02.147909       1 status.go:298] The operator is healthy\nI0226 11:16:02.148373       1 status.go:373] No status update necessary, objects are identical\n
Feb 26 11:16:05.451 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7db6db4bb7-gz2j5 node/ip-10-0-138-228.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): olumeClaim: unknown (get persistentvolumeclaims)\\nE0226 10:54:36.292874       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\\nE0226 10:54:36.292954       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\\nW0226 10:54:36.309284       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 20289 (21046)\\nE0226 10:54:36.310025       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\\nE0226 10:54:36.310121       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\\nW0226 10:54:36.333479       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 20289 (21046)\\nW0226 10:54:36.376306       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 17373 (21046)\\nW0226 10:54:36.386805       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 18251 (21046)\\nW0226 11:10:10.052453       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 21045 (32083)\\n\"\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-138-1.ec2.internal pods/openshift-kube-scheduler-ip-10-0-138-1.ec2.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0226 11:16:00.824982       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:16:00.825052       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:16:05.519 E ns/openshift-console-operator pod/console-operator-55586fd49f-w6kff node/ip-10-0-138-228.ec2.internal container=console-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 26 11:16:06.567 E ns/openshift-machine-api pod/machine-api-controllers-84774f9657-r6qw4 node/ip-10-0-138-228.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 26 11:16:06.604 E ns/openshift-machine-config-operator pod/machine-config-operator-7dd585445f-xmpfm node/ip-10-0-138-228.ec2.internal container=machine-config-operator container exited with code 2 (Error): ...\nE0226 11:12:02.119718       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"64a1d490-fff0-44e4-b4ff-e3e464affb36", ResourceVersion:"32892", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718309922, loc:(*time.Location)(0x271c9e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-7dd585445f-xmpfm_25360831-2060-4ff1-a2f8-e22549f8b070\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-02-26T11:12:02Z\",\"renewTime\":\"2020-02-26T11:12:02Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7dd585445f-xmpfm_25360831-2060-4ff1-a2f8-e22549f8b070 became leader'\nI0226 11:12:02.120405       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config\nI0226 11:12:02.660572       1 operator.go:246] Starting MachineConfigOperator\nW0226 11:13:20.448642       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 21028 (34928)\nW0226 11:13:20.455185       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 21028 (34929)\n
Feb 26 11:16:06.727 E ns/openshift-machine-api pod/machine-api-operator-6769f9fbfc-h5vmp node/ip-10-0-138-228.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 26 11:16:06.761 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6d79787cc4-4vbg5 node/ip-10-0-138-228.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): rets: [{csr-signer false}]\\nI0226 11:12:50.163360       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0226 11:12:50.163683       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0226 11:13:00.171321       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0226 11:13:00.171906       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0226 11:13:10.181719       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0226 11:13:10.182555       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nE0226 11:13:19.423127       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=30178&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp [::1]:6443: connect: connection refused\\nE0226 11:13:19.423292       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=34661&timeout=7m55s&timeoutSeconds=475&watch=true: dial tcp [::1]:6443: connect: connection refused\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-138-1.ec2.internal pods/kube-controller-manager-ip-10-0-138-1.ec2.internal container=\"cluster-policy-controller-8\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-138-1.ec2.internal pods/kube-controller-manager-ip-10-0-138-1.ec2.internal container=\"kube-controller-manager-8\" is not ready"\nI0226 11:16:05.181464       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0226 11:16:05.181533       1 leaderelection.go:66] leaderelection lost\n
Feb 26 11:16:16.955 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-147-70.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-26T11:16:06.529Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-26T11:16:06.535Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-26T11:16:06.544Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-26T11:16:06.545Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-26T11:16:06.545Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-26T11:16:06.546Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-26T11:16:06.546Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-26T11:16:06.546Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-26T11:16:06.547Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-26T11:16:06.547Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-26
Feb 26 11:16:38.401 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:23.401 - 45s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 26 11:17:41.856 E clusteroperator/authentication changed Degraded to True: OAuthClientsDegradedError: OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-browser-client)
Feb 26 11:17:56.730 E ns/openshift-cluster-node-tuning-operator pod/tuned-2xwmd node/ip-10-0-154-226.ec2.internal container=tuned container exited with code 143 (Error): ces ens3\n2020-02-26 11:14:42,564 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-26 11:14:42,578 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0226 11:14:58.347659    3503 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-7ffff6745c-mtb9g) labels changed node wide: true\nI0226 11:15:01.675611    3503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:15:01.677520    3503 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:15:01.797181    3503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:15:59.695246    3503 openshift-tuned.go:550] Pod (openshift-apiserver-operator/openshift-apiserver-operator-5d5c58b48f-nrpml) labels changed node wide: true\nI0226 11:16:01.675582    3503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:16:01.677888    3503 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:16:01.875298    3503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:16:01.875445    3503 openshift-tuned.go:550] Pod (openshift-ingress-operator/ingress-operator-76459f9756-rhm47) labels changed node wide: true\nI0226 11:16:06.676114    3503 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:16:06.678270    3503 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:16:06.876471    3503 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:16:21.429123    3503 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0226 11:16:21.453270    3503 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 11:16:21.453485    3503 openshift-tuned.go:883] Increasing resyncPeriod to 232\n
Feb 26 11:18:13.915 E ns/openshift-monitoring pod/node-exporter-m9cf7 node/ip-10-0-129-96.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:56:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:56:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:18:14.051 E ns/openshift-multus pod/multus-8jrf5 node/ip-10-0-129-96.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:18:14.128 E ns/openshift-machine-config-operator pod/machine-config-daemon-7d7gx node/ip-10-0-129-96.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:18:14.153 E ns/openshift-cluster-node-tuning-operator pod/tuned-vvmrs node/ip-10-0-129-96.ec2.internal container=tuned container exited with code 143 (Error): r/lib/tuned/ocp-pod-labels.cfg\nI0226 11:15:38.030481   87640 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:15:38.150113   87640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:15:50.738347   87640 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-554/dp-657fc4b57d-pvvvb) labels changed node wide: true\nI0226 11:15:53.032332   87640 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:15:53.034911   87640 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:15:53.267148   87640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:15:56.759205   87640 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-597944b69c-gmmcx) labels changed node wide: true\nI0226 11:15:58.028858   87640 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:15:58.042610   87640 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:15:58.156655   87640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:16:06.772884   87640 openshift-tuned.go:550] Pod (openshift-ingress/router-default-74c9845497-4przv) labels changed node wide: true\nI0226 11:16:08.028883   87640 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:16:08.030403   87640 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:16:08.146865   87640 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0226 11:16:21.406949   87640 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0226 11:16:21.411153   87640 openshift-tuned.go:881] Pod event watch channel closed.\nI0226 11:16:21.411200   87640 openshift-tuned.go:883] Increasing resyncPeriod to 124\n
Feb 26 11:18:16.708 E ns/openshift-multus pod/multus-8jrf5 node/ip-10-0-129-96.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 26 11:18:23.431 E ns/openshift-machine-config-operator pod/machine-config-daemon-7d7gx node/ip-10-0-129-96.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 26 11:18:38.822 E ns/openshift-monitoring pod/node-exporter-bj2mf node/ip-10-0-138-228.ec2.internal container=node-exporter container exited with code 143 (Error): 2-26T10:57:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-26T10:57:05Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 26 11:18:38.836 E ns/openshift-controller-manager pod/controller-manager-zfb6t node/ip-10-0-138-228.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 26 11:18:38.852 E ns/openshift-sdn pod/sdn-controller-5z52m node/ip-10-0-138-228.ec2.internal container=sdn-controller container exited with code 2 (Error): :[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-138-228 became leader'\nI0226 11:11:08.008914       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0226 11:11:08.014295       1 master.go:51] Initializing SDN master\nI0226 11:11:08.027763       1 network_controller.go:60] Started OpenShift Network Controller\nE0226 11:13:19.440700       1 reflector.go:280] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=23389&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 10.0.148.39:6443: connect: connection refused\nE0226 11:13:19.444142       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: Get https://api-int.ci-op-0dc7lsdl-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=21046&timeout=7m0s&timeoutSeconds=420&watch=true: dial tcp 10.0.148.39:6443: connect: connection refused\nW0226 11:13:20.409364       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 23377 (34930)\nW0226 11:13:20.470938       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 23389 (34929)\nW0226 11:13:20.471842       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 21046 (34928)\n
Feb 26 11:18:38.886 E ns/openshift-multus pod/multus-admission-controller-dznwh node/ip-10-0-138-228.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 26 11:18:38.903 E ns/openshift-sdn pod/ovs-wkh6m node/ip-10-0-138-228.ec2.internal container=openvswitch container exited with code 143 (Error): eletes)\n2020-02-26T11:16:04.393Z|00040|jsonrpc|WARN|unix#821: send error: Broken pipe\n2020-02-26T11:16:04.393Z|00041|reconnect|WARN|unix#821: connection dropped (Broken pipe)\n2020-02-26T11:16:04.882Z|00250|connmgr|INFO|br0<->unix#962: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:16:04.961Z|00251|bridge|INFO|bridge br0: deleted interface veth15171c7d on port 30\n2020-02-26T11:16:05.026Z|00252|connmgr|INFO|br0<->unix#965: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:16:05.111Z|00253|connmgr|INFO|br0<->unix#968: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:16:05.197Z|00254|bridge|INFO|bridge br0: deleted interface veth45c2196c on port 18\n2020-02-26T11:16:05.757Z|00255|connmgr|INFO|br0<->unix#971: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:16:05.811Z|00256|connmgr|INFO|br0<->unix#974: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:16:05.148Z|00042|jsonrpc|WARN|unix#835: send error: Broken pipe\n2020-02-26T11:16:05.148Z|00043|reconnect|WARN|unix#835: connection dropped (Broken pipe)\n2020-02-26T11:16:05.163Z|00044|reconnect|WARN|unix#836: connection dropped (Broken pipe)\n2020-02-26T11:16:05.859Z|00257|bridge|INFO|bridge br0: deleted interface veth9a78486c on port 26\n2020-02-26T11:16:05.930Z|00258|connmgr|INFO|br0<->unix#977: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:16:06.022Z|00259|connmgr|INFO|br0<->unix#980: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:16:06.062Z|00260|bridge|INFO|bridge br0: deleted interface veth54110e8e on port 22\n2020-02-26T11:16:06.112Z|00261|connmgr|INFO|br0<->unix#983: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-26T11:16:06.195Z|00262|connmgr|INFO|br0<->unix#986: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-26T11:16:06.238Z|00263|bridge|INFO|bridge br0: deleted interface veth461a49b0 on port 12\n2020-02-26T11:16:05.940Z|00045|reconnect|WARN|unix#843: connection dropped (Broken pipe)\n2020-02-26T11:16:07.276Z|00046|reconnect|WARN|unix#856: connection dropped (Connection reset by peer)\nExiting ovs-vswitchd (12384).\nTerminated\n
Feb 26 11:18:38.919 E ns/openshift-multus pod/multus-cfwq2 node/ip-10-0-138-228.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 26 11:18:38.953 E ns/openshift-machine-config-operator pod/machine-config-daemon-qj9t9 node/ip-10-0-138-228.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 26 11:18:38.971 E ns/openshift-machine-config-operator pod/machine-config-server-r6jzb node/ip-10-0-138-228.ec2.internal container=machine-config-server container exited with code 2 (Error): I0226 11:09:43.871095       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-165-g1452cf64-dirty (1452cf640f3d96989ebbc88339c847b553c8fe3c)\nI0226 11:09:43.872925       1 api.go:51] Launching server on :22624\nI0226 11:09:43.872982       1 api.go:51] Launching server on :22623\n
Feb 26 11:18:38.987 E ns/openshift-cluster-node-tuning-operator pod/tuned-44fmw node/ip-10-0-138-228.ec2.internal container=tuned container exited with code 143 (Error): 38-228.ec2.internal) labels changed node wide: false\nI0226 11:16:02.781905    1127 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-7-ip-10-0-138-228.ec2.internal) labels changed node wide: false\nI0226 11:16:02.970405    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-6-ip-10-0-138-228.ec2.internal) labels changed node wide: false\nI0226 11:16:03.390061    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-138-228.ec2.internal) labels changed node wide: false\nI0226 11:16:03.797814    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-2-ip-10-0-138-228.ec2.internal) labels changed node wide: false\nI0226 11:16:04.164356    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-6-ip-10-0-138-228.ec2.internal) labels changed node wide: false\nI0226 11:16:04.555892    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-138-228.ec2.internal) labels changed node wide: true\nI0226 11:16:08.462312    1127 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:16:08.468446    1127 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:16:08.639165    1127 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:16:15.859227    1127 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-138-228.ec2.internal) labels changed node wide: true\nI0226 11:16:18.462188    1127 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0226 11:16:18.463771    1127 openshift-tuned.go:441] Getting recommended profile...\nI0226 11:16:18.624319    1127 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0226 11:16:21.051371    1127 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-138-228.ec2.internal) labels changed node wide: true\n
Feb 26 11:18:39.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): 6 11:16:21.085074       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085193       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085250       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085295       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085328       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085361       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085419       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085305       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085500       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085507       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.085584       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0226 11:16:21.157765       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nE0226 11:16:21.261708       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nI0226 11:16:21.312934       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-138-228.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0226 11:16:21.313331       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 26 11:18:39.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0226 10:52:40.908082       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 26 11:18:39.102 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:12:45.724154       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:12:45.725825       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0226 11:12:45.933660       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:12:45.933993       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 26 11:18:39.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): ] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?resourceVersion=37113&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.489947       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=20275&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.489995       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1beta1.ReplicaSet: Get https://localhost:6443/apis/extensions/v1beta1/replicasets?resourceVersion=37951&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.490208       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?resourceVersion=32082&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.490253       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?resourceVersion=32083&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.490416       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?resourceVersion=37951&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.490546       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1beta1.Deployment: Get https://localhost:6443/apis/extensions/v1beta1/deployments?resourceVersion=37952&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Feb 26 11:18:39.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:15:10.722063       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:15:10.722398       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:15:20.735122       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:15:20.735529       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:15:30.749457       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:15:30.749931       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:15:40.760672       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:15:40.761383       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:15:50.782511       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:15:50.782951       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:16:00.802700       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:16:00.803050       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:16:10.824847       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:16:10.825485       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0226 11:16:20.840878       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0226 11:16:20.841283       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 26 11:18:39.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): 3/apis/operator.openshift.io/v1/openshiftcontrollermanagers?allowWatchBookmarks=true&resourceVersion=26108&timeout=5m33s&timeoutSeconds=333&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.569037       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=37125&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.569109       1 reflector.go:280] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://localhost:6443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=20417&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.569076       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/imageregistry.operator.openshift.io/v1/configs?allowWatchBookmarks=true&resourceVersion=36589&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.569166       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/tuned.openshift.io/v1/tuneds?allowWatchBookmarks=true&resourceVersion=32401&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0226 11:16:21.569258       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?allowWatchBookmarks=true&resourceVersion=34928&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Feb 26 11:18:39.133 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-228.ec2.internal node/ip-10-0-138-228.ec2.internal container=scheduler container exited with code 2 (Error): de resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 11:16:07.170082       1 scheduler.go:667] pod openshift-monitoring/alertmanager-main-0 is bound successfully on node "ip-10-0-147-70.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0226 11:16:08.975046       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54f544f44f-nd82s: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0226 11:16:16.237000       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54f544f44f-nd82s: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0226 11:16:18.830218       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-54f544f44f-nd82s: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0226 11:16:19.011999       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-74c948d74f-npgb9 is bound successfully on node "ip-10-0-138-1.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\n
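The scheduler entry shows the etcd-quorum-guard replica blocked while two masters are cordoned for the upgrade: with required pod anti-affinity keeping one replica per control-plane node, there is no feasible node until a master is uncordoned again. As a hedged illustration of the kind of rule that yields "didn't satisfy existing pods anti-affinity rules" (the app=etcd-quorum-guard label and the function name are illustrative, not copied from the cluster manifests):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// quorumGuardAffinity sketches a required pod anti-affinity that allows at
// most one matching replica per node hostname, which is what leaves a
// replica Pending while its eligible nodes are cordoned.
// The app=etcd-quorum-guard label is an assumption for illustration.
func quorumGuardAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "etcd-quorum-guard"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", quorumGuardAffinity())
}
```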
Feb 26 11:18:44.188 E ns/openshift-multus pod/multus-cfwq2 node/ip-10-0-138-228.ec2.internal invariant violation: pod may not transition Running->Pending
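The multus entry is the e2e monitor flagging a pod phase regression (Running -> Pending) as an invariant violation, which in practice usually means the pod object was recreated under the same name while the node rebooted. A minimal sketch of such a check — the phase ranking and function names are assumptions, not the origin suite's code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// phaseRank orders pod phases so that a drop in rank (e.g. Running -> Pending)
// can be flagged, mirroring the "pod may not transition Running->Pending"
// message above. The ranking itself is an assumption for illustration.
func phaseRank(p corev1.PodPhase) int {
	switch p {
	case corev1.PodPending:
		return 1
	case corev1.PodRunning:
		return 2
	case corev1.PodSucceeded, corev1.PodFailed:
		return 3
	default:
		return 0
	}
}

// checkTransition returns a non-empty message when a pod appears to move
// backwards between two observations.
func checkTransition(ns, name string, oldPhase, newPhase corev1.PodPhase) string {
	if phaseRank(newPhase) < phaseRank(oldPhase) {
		return fmt.Sprintf("ns/%s pod/%s invariant violation: pod may not transition %s->%s",
			ns, name, oldPhase, newPhase)
	}
	return ""
}

func main() {
	fmt.Println(checkTransition("openshift-multus", "multus-cfwq2",
		corev1.PodRunning, corev1.PodPending))
}
```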
Feb 26 11:18:50.587 E ns/openshift-machine-config-operator pod/machine-config-daemon-qj9t9 node/ip-10-0-138-228.ec2.internal container=oauth-proxy container exited with code 1 (Error):