Result: SUCCESS
Tests: 4 failed / 21 succeeded
Started: 2020-08-17 12:17
Elapsed: 1h24m
Work namespace: ci-op-ym6wq58d
Refs: release-4.3:ee962bde, 297:dda2691a
Pod: 977c8dd3-e083-11ea-b302-0a580a81078f
Repo: openshift/cluster-authentication-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (36m14s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 13s of 31m35s (1%):

Aug 17 13:11:38.960 E ns/e2e-k8s-service-lb-available-7880 svc/service-test Service stopped responding to GET requests over new connections
Aug 17 13:11:39.960 - 2s    E ns/e2e-k8s-service-lb-available-7880 svc/service-test Service is not responding to GET requests over new connections
Aug 17 13:11:43.430 I ns/e2e-k8s-service-lb-available-7880 svc/service-test Service started responding to GET requests over new connections
Aug 17 13:12:00.960 E ns/e2e-k8s-service-lb-available-7880 svc/service-test Service stopped responding to GET requests over new connections
Aug 17 13:12:01.960 - 7s    E ns/e2e-k8s-service-lb-available-7880 svc/service-test Service is not responding to GET requests over new connections
Aug 17 13:12:10.337 I ns/e2e-k8s-service-lb-available-7880 svc/service-test Service started responding to GET requests over new connections
Aug 17 13:13:41.960 E ns/e2e-k8s-service-lb-available-7880 svc/service-test Service stopped responding to GET requests on reused connections
Aug 17 13:13:42.118 I ns/e2e-k8s-service-lb-available-7880 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1597671229.xml
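
The disruption windows above come from a monitor that issues GET requests against the service over both fresh and kept-alive connections and records when the answers change. A minimal sketch of that style of check, not the test's actual code (the URL, polling interval, and output format are illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe reports whether the endpoint answered a GET with HTTP 200.
func probe(client *http.Client, url string) bool {
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Illustrative endpoint; the real test targets the LoadBalancer service
	// created in the e2e-k8s-service-lb-available namespace.
	url := "http://service-test.example.com/"

	// "new connections": keep-alives disabled, so every probe dials fresh.
	newConns := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	// "reused connections": the default transport keeps the connection open.
	reusedConns := &http.Client{Timeout: 5 * time.Second}

	upNew, upReused := true, true
	for range time.Tick(time.Second) {
		if ok := probe(newConns, url); ok != upNew {
			fmt.Printf("%s GET over new connections available=%v\n",
				time.Now().Format("Jan 02 15:04:05.000"), ok)
			upNew = ok
		}
		if ok := probe(reusedConns, url); ok != upReused {
			fmt.Printf("%s GET over reused connections available=%v\n",
				time.Now().Format("Jan 02 15:04:05.000"), ok)
			upReused = ok
		}
	}
}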


Cluster upgrade Cluster frontend ingress remain available (34m43s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 4m13s of 34m43s (12%):

Aug 17 13:10:46.330 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 17 13:10:46.594 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 17 13:11:35.330 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 17 13:11:35.888 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 17 13:11:58.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 17 13:11:58.330 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 17 13:11:58.330 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 17 13:11:59.329 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 17 13:11:59.329 - 33s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 17 13:11:59.329 - 33s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 17 13:12:08.599 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 17 13:12:19.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 17 13:12:19.597 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 17 13:12:33.609 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 17 13:12:33.609 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 17 13:12:35.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 17 13:12:36.329 - 22s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 17 13:12:48.260 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 17 13:12:48.329 - 5s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 17 13:12:52.330 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 17 13:12:53.329 - 3s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Aug 17 13:12:53.547 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 17 13:12:57.628 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 17 13:12:58.721 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 17 13:21:15.329 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 17 13:21:15.329 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 17 13:21:15.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 17 13:21:15.588 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 17 13:21:15.621 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 17 13:21:15.833 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 17 13:24:11.329 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 17 13:24:11.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 17 13:24:12.329 - 32s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 17 13:24:12.329 - 32s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 17 13:24:12.329 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 17 13:24:12.329 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 17 13:24:13.329 - 31s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Aug 17 13:24:13.329 - 31s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 17 13:24:45.438 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 17 13:24:45.440 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 17 13:24:45.492 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 17 13:24:45.496 I ns/openshift-console route/console Route started responding to GET requests on reused connections
				from junit_upgrade_1597671229.xml
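
The headline figure ("4m13s of 34m43s (12%)") is the sum of the outage windows above over the monitored period. A small sketch of that aggregation, using two hypothetical windows rather than the full event list:

package main

import (
	"fmt"
	"time"
)

// outage is one "stopped responding ... started responding" window from the
// event stream above.
type outage struct {
	from, to time.Time
}

// totalDisruption sums the outage windows and reports what share of the
// monitored period they cover, mirroring the "Xs of Ys (Z%)" summary line.
func totalDisruption(windows []outage, monitored time.Duration) (time.Duration, float64) {
	var down time.Duration
	for _, w := range windows {
		down += w.to.Sub(w.from)
	}
	return down, 100 * float64(down) / float64(monitored)
}

func main() {
	base := time.Date(2020, 8, 17, 13, 11, 58, 0, time.UTC)
	// Two illustrative windows; the real summary is computed over every
	// route/console and route/oauth-openshift event recorded by the monitor.
	windows := []outage{
		{from: base, to: base.Add(33 * time.Second)},
		{from: base.Add(12*time.Minute + 13*time.Second), to: base.Add(12*time.Minute + 47*time.Second)},
	}
	down, pct := totalDisruption(windows, 34*time.Minute+43*time.Second)
	fmt.Printf("unreachable for at least %s of 34m43s (%.0f%%)\n", down, pct)
}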


Cluster upgrade Kubernetes and OpenShift APIs remain available (34m43s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1m1s of 34m43s (3%):

Aug 17 13:11:40.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 17 13:11:40.258 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:22:01.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 17 13:22:02.185 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:22:16.258 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:22:32.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 17 13:22:32.254 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:24:52.457 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:24:53.185 - 9s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:25:02.824 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:25:18.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 17 13:25:19.185 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:25:33.249 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:28:05.742 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:28:06.185 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:28:06.569 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:28:23.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 17 13:28:23.252 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:28:41.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 17 13:28:41.250 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:28:57.185 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 17 13:28:58.185 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:00.003 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:03.008 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:03.076 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:06.080 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:06.147 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:09.152 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:09.185 - 8s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:18.433 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:21.441 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:22.185 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:24.580 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:30.658 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:30.744 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:33.729 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:33.854 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:39.873 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:40.003 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:42.944 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:43.185 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:49.159 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:52.161 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:52.185 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:52.235 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:29:55.232 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:29:55.303 I openshift-apiserver OpenShift API started responding to GET requests
Aug 17 13:30:07.521 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 17 13:30:08.185 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:30:13.729 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1597671229.xml
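
The URL in the errors above probes a deliberately missing imagestream, so the check is about whether the aggregated openshift-apiserver answers at all rather than what it answers; a client timeout or a 503 from the aggregator is counted as disruption. A rough, unauthenticated sketch of that kind of probe (the real monitor uses an authenticated Kubernetes client; the host and polling interval here are placeholders):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder host; the run above polls the cluster's api.<domain>:6443.
	url := "https://api.example.openshift.com:6443/apis/image.openshift.io/v1/" +
		"namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"
	client := &http.Client{Timeout: 15 * time.Second}

	for range time.Tick(3 * time.Second) {
		resp, err := client.Get(url)
		now := time.Now().Format("Jan 02 15:04:05.000")
		switch {
		case err != nil:
			// Matches the "Client.Timeout exceeded while awaiting headers" case.
			fmt.Printf("%s OpenShift API stopped responding to GET requests: %v\n", now, err)
		case resp.StatusCode == http.StatusServiceUnavailable:
			// Matches "the server is currently unable to handle the request".
			fmt.Printf("%s OpenShift API stopped responding to GET requests: %s\n", now, resp.Status)
			resp.Body.Close()
		default:
			resp.Body.Close() // any other timely response means the API is up
		}
	}
}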


openshift-tests Monitor cluster while tests execute (36m18s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
206 error level events were detected during this test run:

Aug 17 12:57:56.207 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): 1/imagestreams?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0817 12:57:55.476811       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.DaemonSet: Get https://localhost:6443/apis/apps/v1/daemonsets?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0817 12:57:55.477972       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0817 12:57:55.478931       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?limit=500&resourceVersion=0: dial tcp [::1]:6443: connect: connection refused\nE0817 12:57:55.944431       1 event.go:247] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-131-26 stopped leading'\nI0817 12:57:55.944562       1 leaderelection.go:263] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0817 12:57:55.944616       1 policy_controller.go:94] leaderelection lost\nI0817 12:57:55.950627       1 resource_quota_controller.go:295] Shutting down resource quota controller\n
Aug 17 13:01:02.596 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Aug 17 13:01:15.863 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-585f576bc5-7825f node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): Available: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 6" to "Available: 3 nodes are active; 3 nodes are at revision 6"\nI0817 12:58:17.001964       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da7d131e-d5fc-4098-b819-11baec342685", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0817 12:58:17.818288       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da7d131e-d5fc-4098-b819-11baec342685", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal container=\"kube-apiserver-6\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0817 12:58:20.605635       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da7d131e-d5fc-4098-b819-11baec342685", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-131-26.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nW0817 13:01:12.936427       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19921 (20062)\nI0817 13:01:14.764540       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:01:14.764633       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:02:36.127 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-c8fbc9b96-6v8pd node/ip-10-0-131-26.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): e.internal container=\"cluster-policy-controller-8\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal container=\"cluster-policy-controller-8\" is waiting: \"CrashLoopBackOff\" - \"back-off 40s restarting failed container=cluster-policy-controller-8 pod=kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal_openshift-kube-controller-manager(0c41d2f9f97400c85beb920f5ffc7c71)\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal container=\"cluster-policy-controller-8\" is not ready"\nI0817 12:59:00.930785       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"54555485-e80f-46df-9ca4-bf97e8447b39", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal container=\"cluster-policy-controller-8\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nW0817 13:01:12.863939       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19921 (20056)\nI0817 13:02:35.512555       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:02:35.512722       1 leaderelection.go:66] leaderelection lost\nI0817 13:02:35.525193       1 backing_resource_controller.go:148] Shutting down BackingResourceController\nI0817 13:02:35.525539       1 secure_serving.go:167] Stopped listening on [::]:8443\n
Aug 17 13:02:42.144 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-679b5fb64c-r45mk node/ip-10-0-131-26.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): odes are ready" to "StaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-131-26.us-west-1.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0817 12:58:15.915888       1 status_controller.go:175] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-08-17T12:46:25Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-17T12:48:46Z","message":"Progressing: 3 nodes are at revision 5","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-17T12:43:00Z","message":"Available: 3 nodes are active; 3 nodes are at revision 5","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-17T12:40:28Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0817 12:58:15.927942       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"7074de23-e248-4d82-8733-da4d0758862f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-131-26.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-131-26.us-west-1.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nW0817 13:01:12.938120       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19921 (20062)\nI0817 13:02:41.155183       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:02:41.155380       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:03:09.894 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 03:09.210328       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:03:09.210336       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:03:09.210344       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:03:09.210352       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:03:09.210361       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:03:09.210370       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:03:09.210378       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:03:09.210386       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:03:09.210395       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:03:09.210404       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:03:09.210418       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:03:09.210430       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:03:09.210440       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:03:09.210451       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:03:09.210493       1 server.go:692] external host was not specified, using 10.0.129.220\nI0817 13:03:09.210763       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:03:09.211053       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:03:33.048 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 03:31.959154       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:03:31.959164       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:03:31.959173       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:03:31.959183       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:03:31.959194       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:03:31.959204       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:03:31.959214       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:03:31.959223       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:03:31.959232       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:03:31.959243       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:03:31.959257       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:03:31.959269       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:03:31.959281       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:03:31.959292       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:03:31.959338       1 server.go:692] external host was not specified, using 10.0.129.220\nI0817 13:03:31.959485       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:03:31.959821       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:04:03.090 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 04:02.864231       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:04:02.864239       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:04:02.864248       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:04:02.864267       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:04:02.864284       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:04:02.864301       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:04:02.864323       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:04:02.864339       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:04:02.864352       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:04:02.864361       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:04:02.864379       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:04:02.864403       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:04:02.864414       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:04:02.864424       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:04:02.864463       1 server.go:692] external host was not specified, using 10.0.129.220\nI0817 13:04:02.864651       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:04:02.864992       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:04:26.491 E ns/openshift-machine-api pod/machine-api-operator-59d5d69b66-gw854 node/ip-10-0-156-63.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 17 13:04:37.283 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:04:36.295587       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:04:36.297207       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0817 13:04:36.297942       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0817 13:04:36.297248       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Aug 17 13:04:59.373 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:04:58.839989       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:04:58.842133       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:04:58.842263       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0817 13:04:58.842670       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 17 13:05:29.743 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :05:29.145556       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:05:29.145565       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:05:29.145573       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:05:29.145583       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:05:29.145592       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:05:29.145600       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:05:29.145609       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:05:29.145617       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:05:29.145625       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:05:29.145634       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:05:29.145666       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:05:29.145678       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:05:29.145689       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:05:29.145698       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:05:29.145741       1 server.go:692] external host was not specified, using 10.0.131.26\nI0817 13:05:29.145947       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:05:29.146506       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:05:47.868 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :05:47.529009       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:05:47.529015       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:05:47.529020       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:05:47.529026       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:05:47.529031       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:05:47.529037       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:05:47.529042       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:05:47.529048       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:05:47.529053       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:05:47.529059       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:05:47.529070       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:05:47.529077       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:05:47.529083       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:05:47.529090       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:05:47.529120       1 server.go:692] external host was not specified, using 10.0.131.26\nI0817 13:05:47.529251       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:05:47.529484       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:06:01.982 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:06:01.122996       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:06:01.144250       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:06:01.144896       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0817 13:06:01.147263       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 17 13:06:09.978 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :06:09.505725       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:06:09.505734       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:06:09.505742       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:06:09.505751       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:06:09.505763       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:06:09.505773       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:06:09.505781       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:06:09.505789       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:06:09.505797       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:06:09.505806       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:06:09.505819       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:06:09.505832       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:06:09.505843       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:06:09.505853       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:06:09.505900       1 server.go:692] external host was not specified, using 10.0.131.26\nI0817 13:06:09.506088       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:06:09.506878       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:06:21.027 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:06:20.457416       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:06:20.459558       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:06:20.459627       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0817 13:06:20.460428       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 17 13:06:31.851 E ns/openshift-machine-api pod/machine-api-controllers-58dcf65564-zzp7k node/ip-10-0-129-220.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 17 13:07:05.918 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update kubestorageversionmigrator "cluster" (128 of 508)
Aug 17 13:07:34.303 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:07:33.392839       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:07:33.395638       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:07:33.396302       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0817 13:07:33.397239       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 17 13:07:44.381 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :07:43.772892       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:07:43.772901       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:07:43.772909       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:07:43.772918       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:07:43.772926       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:07:43.772934       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:07:43.772942       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:07:43.772950       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:07:43.772958       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:07:43.772968       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:07:43.772981       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:07:43.773004       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:07:43.773016       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:07:43.773029       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:07:43.773071       1 server.go:692] external host was not specified, using 10.0.156.63\nI0817 13:07:43.773262       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:07:43.773614       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:07:52.456 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 255 (Error): I0817 13:07:51.877996       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:07:51.880889       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:07:51.882362       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0817 13:07:51.883880       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 17 13:08:08.556 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :08:07.566959       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:08:07.566968       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:08:07.566976       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:08:07.566986       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:08:07.566995       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:08:07.567003       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:08:07.567011       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:08:07.567020       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:08:07.567028       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:08:07.567038       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:08:07.567051       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:08:07.567064       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:08:07.567076       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:08:07.567087       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:08:07.567128       1 server.go:692] external host was not specified, using 10.0.156.63\nI0817 13:08:07.567299       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:08:07.567621       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:08:13.073 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-756bxhbf2 node/ip-10-0-129-220.us-west-1.compute.internal container=operator container exited with code 255 (Error): go/informers/factory.go:134: forcing resync\nI0817 13:07:14.986334       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0817 13:07:15.214133       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0817 13:07:16.564977       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 68 items received\nW0817 13:07:16.567746       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 22646 (22651)\nI0817 13:07:16.629452       1 reflector.go:241] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0817 13:07:17.568062       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0817 13:07:19.304084       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 0 items received\nW0817 13:07:19.323124       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 22651 (22668)\nI0817 13:07:20.323415       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0817 13:07:38.567251       1 httplog.go:90] GET /metrics: (7.207172ms) 200 [Prometheus/2.14.0 10.128.2.21:43868]\nI0817 13:07:43.729515       1 httplog.go:90] GET /metrics: (1.388675ms) 200 [Prometheus/2.14.0 10.131.0.14:35860]\nI0817 13:08:08.568249       1 httplog.go:90] GET /metrics: (10.015855ms) 200 [Prometheus/2.14.0 10.128.2.21:43868]\nI0817 13:08:11.672689       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:08:11.673317       1 builder.go:217] server exited\n
Aug 17 13:08:18.801 E ns/openshift-service-ca-operator pod/service-ca-operator-77ff8d55ff-tctk4 node/ip-10-0-131-26.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Aug 17 13:08:20.860 E ns/openshift-controller-manager pod/controller-manager-5c49c node/ip-10-0-129-220.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 17 13:08:28.996 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :08:28.553337       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0817 13:08:28.553346       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0817 13:08:28.553354       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0817 13:08:28.553363       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0817 13:08:28.553372       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0817 13:08:28.553381       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0817 13:08:28.553390       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0817 13:08:28.553398       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0817 13:08:28.553407       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0817 13:08:28.553416       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0817 13:08:28.553431       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0817 13:08:28.553447       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0817 13:08:28.553457       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0817 13:08:28.553468       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0817 13:08:28.553541       1 server.go:692] external host was not specified, using 10.0.156.63\nI0817 13:08:28.553728       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0817 13:08:28.554050       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 17 13:08:39.573 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/17 12:53:33 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 17 13:08:39.573 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/17 12:53:34 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 12:53:34 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 12:53:34 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 12:53:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 12:53:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 12:53:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 12:53:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 12:53:34 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 12:53:34 http.go:106: HTTPS: listening on [::]:9091\n2020/08/17 12:57:25 oauthproxy.go:774: basicauth: 10.128.2.14:40138 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:01:55 oauthproxy.go:774: basicauth: 10.128.2.14:45364 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:03:26 oauthproxy.go:774: basicauth: 10.128.0.22:36900 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:06:25 oauthproxy.go:774: basicauth: 10.128.2.14:51028 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:08:14 oauthproxy.go:774: basicauth: 10.129.0.62:43246 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 17 13:08:39.573 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-17T12:53:33.794717882Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-17T12:53:33.794827483Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-17T12:53:33.79624779Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-17T12:53:38.90242403Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 17 13:08:40.581 E ns/openshift-monitoring pod/openshift-state-metrics-7c796845c4-j9jvh node/ip-10-0-151-62.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 17 13:08:54.822 E ns/openshift-ingress pod/router-default-7459cfb5fd-tffl9 node/ip-10-0-151-62.us-west-1.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:05.247142       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:10.245462       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:15.250802       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:20.268316       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:25.278839       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:30.286437       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:35.250754       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:40.251660       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:47.469243       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:08:52.479910       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 17 13:08:55.871 E ns/openshift-monitoring pod/prometheus-adapter-6d96c96495-9w9z6 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0817 12:50:53.500666       1 adapter.go:93] successfully using in-cluster auth\nI0817 12:50:54.481304       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 17 13:09:03.835 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-17T13:08:57.047Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-17T13:08:57.052Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-17T13:08:57.053Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-17T13:08:57.054Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-17T13:08:57.054Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-17T13:08:57.054Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-17T13:08:57.055Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-17T13:08:57.055Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-17
Aug 17 13:09:08.176 E ns/openshift-console-operator pod/console-operator-77644886f9-69gb6 node/ip-10-0-156-63.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): y ended with: too old resource version: 17664 (22358)\nW0817 13:08:50.822240       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 20964 (21406)\nW0817 13:08:50.822288       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21789 (24166)\nW0817 13:08:50.826101       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21293 (24166)\nW0817 13:08:50.826295       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 23456 (24166)\nW0817 13:08:50.849883       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17485 (22362)\nW0817 13:08:50.875672       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20734 (24166)\nW0817 13:08:50.875807       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 18315 (22362)\nW0817 13:08:50.884964       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 18838 (21406)\nW0817 13:08:50.982966       1 reflector.go:299] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: watch of *v1.ConsoleCLIDownload ended with: too old resource version: 18321 (25341)\nW0817 13:08:51.058930       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 18315 (25342)\nI0817 13:09:07.384401       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:09:07.384459       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:09:12.870 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-62.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 12:52:04 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:09:12.870 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-62.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 12:52:04 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 12:52:04 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 12:52:04 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 12:52:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 12:52:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 12:52:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 12:52:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 12:52:04 http.go:106: HTTPS: listening on [::]:9095\n
Aug 17 13:09:12.903 E ns/openshift-marketplace pod/redhat-operators-d55d6844f-ddxcr node/ip-10-0-151-62.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 17 13:09:20.655 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-136-189.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 12:52:10 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:09:20.655 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-136-189.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 12:52:10 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 12:52:10 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 12:52:10 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 12:52:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 12:52:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 12:52:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 12:52:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 12:52:10 http.go:106: HTTPS: listening on [::]:9095\n
Aug 17 13:09:25.097 E ns/openshift-monitoring pod/node-exporter-x8gbj node/ip-10-0-131-26.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T12:44:56Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T12:44:56Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:09:30.677 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-17T13:09:26.189Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-17T13:09:26.192Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-17T13:09:26.193Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-17T13:09:26.195Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-17T13:09:26.195Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-17
Aug 17 13:09:40.341 E ns/openshift-service-ca pod/service-serving-cert-signer-7945955d8c-xjzml node/ip-10-0-156-63.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Aug 17 13:09:56.212 E ns/openshift-controller-manager pod/controller-manager-bzmlt node/ip-10-0-131-26.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 17 13:10:07.014 E ns/openshift-marketplace pod/community-operators-69d7c484d6-9r8nd node/ip-10-0-151-62.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 17 13:10:36.553 E ns/openshift-console pod/console-594fcdb67c-lgx88 node/ip-10-0-156-63.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/08/17 12:53:39 cmd/main: cookies are secure!\n2020/08/17 12:53:39 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:49 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:59 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:54:09 cmd/main: Binding to [::]:8443...\n2020/08/17 12:54:09 cmd/main: using TLS\n
Aug 17 13:10:47.529 E ns/openshift-console pod/console-594fcdb67c-tp5ff node/ip-10-0-129-220.us-west-1.compute.internal container=console container exited with code 2 (Error): und\n2020/08/17 12:51:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:14 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:44 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:52:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:14 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/17 12:53:34 cmd/main: Binding to [::]:8443...\n2020/08/17 12:53:34 cmd/main: using TLS\n
Aug 17 13:11:32.643 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:11:34.553 E ns/openshift-sdn pod/sdn-controller-z5grw node/ip-10-0-131-26.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): 14 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-496"\nI0817 12:57:35.030490       1 vnids.go:115] Allocated netid 12500565 for namespace "e2e-k8s-service-lb-available-7880"\nI0817 12:57:35.049097       1 vnids.go:115] Allocated netid 11862821 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-1413"\nI0817 12:57:35.072454       1 vnids.go:115] Allocated netid 15860695 for namespace "e2e-k8s-sig-apps-job-upgrade-8973"\nE0817 13:03:58.563247       1 leaderelection.go:365] Failed to update lock: Put https://api-int.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.131.26:35786->10.0.159.34:6443: read: connection reset by peer\nW0817 13:03:58.575953       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 18277 (20900)\nW0817 13:03:58.576996       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 18508 (20899)\nE0817 13:08:49.010891       1 leaderelection.go:365] Failed to update lock: Put https://api-int.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.131.26:60256->10.0.159.34:6443: read: connection reset by peer\nW0817 13:08:49.023309       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 19293 (22530)\nW0817 13:08:49.099061       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 20900 (25277)\nW0817 13:08:49.156996       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 20899 (25285)\n
Aug 17 13:11:36.799 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5cc76df8d7-zkvl4 node/ip-10-0-156-63.us-west-1.compute.internal container=manager container exited with code 1 (Error): ft-cloud-credential-operator/openshift-network\ntime="2020-08-17T13:08:18Z" level=debug msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-08-17T13:08:18Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-08-17T13:08:18Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-08-17T13:08:18Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-08-17T13:08:18Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-08-17T13:08:18Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-08-17T13:08:18Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-08-17T13:08:18Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-08-17T13:08:18Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-08-17T13:08:20Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-08-17T13:10:18Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-08-17T13:10:18Z" level=info msg="reconcile complete" controller=metrics elapsed=1.435225ms\ntime="2020-08-17T13:11:36Z" level=error msg="leader election lostunable to run the manager"\n
Aug 17 13:11:49.739 E ns/openshift-sdn pod/sdn-controller-4xbt5 node/ip-10-0-129-220.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0817 12:39:54.027322       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 17 13:11:52.857 E ns/openshift-multus pod/multus-rdrhl node/ip-10-0-156-63.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 17 13:11:59.068 E ns/openshift-sdn pod/sdn-4pxrk node/ip-10-0-136-189.us-west-1.compute.internal container=sdn container exited with code 255 (Error): hift-cloud-credential-operator/controller-manager-service:\nI0817 13:11:36.905726    1941 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:11:36.905748    1941 proxier.go:350] userspace syncProxyRules took 27.377178ms\nI0817 13:11:37.022729    1941 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:11:37.022753    1941 proxier.go:350] userspace syncProxyRules took 28.102731ms\nI0817 13:11:37.781225    1941 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.130.0.55:443]\nI0817 13:11:37.781268    1941 roundrobin.go:218] Delete endpoint 10.130.0.55:443 for service "openshift-cloud-credential-operator/controller-manager-service:"\nI0817 13:11:37.781330    1941 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.130.0.55:2112]\nI0817 13:11:37.781345    1941 roundrobin.go:218] Delete endpoint 10.130.0.55:2112 for service "openshift-cloud-credential-operator/cco-metrics:cco-metrics"\nI0817 13:11:37.899664    1941 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:11:37.899688    1941 proxier.go:350] userspace syncProxyRules took 29.459752ms\nI0817 13:11:38.032948    1941 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:11:38.032976    1941 proxier.go:350] userspace syncProxyRules took 37.539668ms\nI0817 13:11:56.173411    1941 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-7880/service-test: to [10.128.2.24:80]\nI0817 13:11:56.173442    1941 roundrobin.go:218] Delete endpoint 10.131.0.21:80 for service "e2e-k8s-service-lb-available-7880/service-test:"\nI0817 13:11:56.287984    1941 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:11:56.288010    1941 proxier.go:350] userspace syncProxyRules took 27.377998ms\nF0817 13:11:58.622587    1941 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Aug 17 13:12:01.942 E ns/openshift-service-ca pod/apiservice-cabundle-injector-6bf8477c9d-85726 node/ip-10-0-156-63.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Aug 17 13:12:35.763 E ns/openshift-multus pod/multus-admission-controller-jngtt node/ip-10-0-131-26.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 17 13:12:46.196 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Aug 17 13:12:47.643 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:13:17.643 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:13:32.499 E ns/openshift-sdn pod/ovs-d6wg7 node/ip-10-0-151-62.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): ace vethc52153e3 on port 38\n2020-08-17T13:09:32.102Z|00215|connmgr|INFO|br0<->unix#1212: 5 flow_mods in the last 0 s (5 adds)\n2020-08-17T13:09:32.141Z|00216|connmgr|INFO|br0<->unix#1215: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:09:37.905Z|00217|connmgr|INFO|br0<->unix#1221: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:09:37.937Z|00218|connmgr|INFO|br0<->unix#1224: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:09:37.970Z|00219|bridge|INFO|bridge br0: deleted interface veth6fadd49c on port 6\n2020-08-17T13:10:06.355Z|00220|connmgr|INFO|br0<->unix#1249: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:10:06.382Z|00221|connmgr|INFO|br0<->unix#1252: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:10:06.405Z|00222|bridge|INFO|bridge br0: deleted interface veth0c49a0fe on port 7\n2020-08-17T13:11:55.003Z|00223|connmgr|INFO|br0<->unix#1330: 2 flow_mods in the last 0 s (2 adds)\n2020-08-17T13:11:55.058Z|00224|connmgr|INFO|br0<->unix#1334: 1 flow_mods in the last 0 s (1 adds)\n2020-08-17T13:11:55.110Z|00225|connmgr|INFO|br0<->unix#1342: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-17T13:11:55.281Z|00226|connmgr|INFO|br0<->unix#1345: 3 flow_mods in the last 0 s (3 adds)\n2020-08-17T13:11:55.312Z|00227|connmgr|INFO|br0<->unix#1350: 1 flow_mods in the last 0 s (1 adds)\n2020-08-17T13:11:55.344Z|00228|connmgr|INFO|br0<->unix#1353: 3 flow_mods in the last 0 s (3 adds)\n2020-08-17T13:11:55.368Z|00229|connmgr|INFO|br0<->unix#1356: 1 flow_mods in the last 0 s (1 adds)\n2020-08-17T13:11:55.396Z|00230|connmgr|INFO|br0<->unix#1359: 3 flow_mods in the last 0 s (3 adds)\n2020-08-17T13:11:55.424Z|00231|connmgr|INFO|br0<->unix#1362: 1 flow_mods in the last 0 s (1 adds)\n2020-08-17T13:11:55.456Z|00232|connmgr|INFO|br0<->unix#1365: 3 flow_mods in the last 0 s (3 adds)\n2020-08-17T13:11:55.486Z|00233|connmgr|INFO|br0<->unix#1368: 1 flow_mods in the last 0 s (1 adds)\n2020-08-17 13:13:31 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 17 13:13:35.525 E ns/openshift-sdn pod/sdn-hwdgv node/ip-10-0-151-62.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 041 roundrobin.go:218] Delete endpoint 10.131.0.25:1936 for service "openshift-ingress/router-internal-default:metrics"\nI0817 13:13:05.091580   94041 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.128.2.28:443 10.131.0.25:443]\nI0817 13:13:05.091591   94041 roundrobin.go:218] Delete endpoint 10.131.0.25:443 for service "openshift-ingress/router-internal-default:https"\nI0817 13:13:05.092518   94041 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:http to [10.128.2.28:80 10.131.0.25:80]\nI0817 13:13:05.092573   94041 roundrobin.go:218] Delete endpoint 10.131.0.25:80 for service "openshift-ingress/router-default:http"\nI0817 13:13:05.092589   94041 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-ingress/router-default:https to [10.128.2.28:443 10.131.0.25:443]\nI0817 13:13:05.092601   94041 roundrobin.go:218] Delete endpoint 10.131.0.25:443 for service "openshift-ingress/router-default:https"\nI0817 13:13:05.233891   94041 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:13:05.233918   94041 proxier.go:350] userspace syncProxyRules took 35.098852ms\nI0817 13:13:05.370643   94041 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:13:05.370668   94041 proxier.go:350] userspace syncProxyRules took 26.931225ms\nI0817 13:13:28.119861   94041 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.64:6443 10.129.0.72:6443 10.130.0.65:6443]\nI0817 13:13:28.119899   94041 roundrobin.go:218] Delete endpoint 10.128.0.64:6443 for service "openshift-multus/multus-admission-controller:"\nI0817 13:13:28.246239   94041 proxier.go:371] userspace proxy: processing 0 service events\nI0817 13:13:28.246262   94041 proxier.go:350] userspace syncProxyRules took 27.833548ms\nF0817 13:13:34.490540   94041 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 17 13:15:17.414 E ns/openshift-multus pod/multus-lg2mv node/ip-10-0-131-26.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 17 13:15:48.472 E ns/openshift-machine-config-operator pod/machine-config-operator-d4f5d7bb8-z2lr2 node/ip-10-0-131-26.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): e ended with: too old resource version: 18324 (22673)\nW0817 13:08:50.873524       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 17758 (22532)\nW0817 13:08:50.873742       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 18322 (23215)\nW0817 13:08:50.873890       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 18400 (23068)\nW0817 13:08:50.942615       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21095 (24169)\nW0817 13:08:50.976180       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 18311 (22587)\nW0817 13:08:51.010965       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 17852 (23955)\nW0817 13:08:51.011257       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 17747 (22530)\nW0817 13:08:51.011525       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 18413 (23215)\nW0817 13:08:51.020836       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 18676 (22530)\nW0817 13:08:51.021643       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 18326 (22679)\n
Aug 17 13:17:43.864 E ns/openshift-machine-config-operator pod/machine-config-daemon-z7lnm node/ip-10-0-131-26.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:17:50.546 E ns/openshift-machine-config-operator pod/machine-config-daemon-dfwr6 node/ip-10-0-136-189.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:17:56.187 E ns/openshift-machine-config-operator pod/machine-config-daemon-r8g2l node/ip-10-0-151-62.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:18:06.135 E ns/openshift-machine-config-operator pod/machine-config-daemon-ljw8z node/ip-10-0-129-220.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:18:15.153 E ns/openshift-machine-config-operator pod/machine-config-daemon-bp9zf node/ip-10-0-156-63.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:18:31.219 E ns/openshift-machine-config-operator pod/machine-config-controller-5896864fd7-fz9hn node/ip-10-0-156-63.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 01: watch of *v1.Image ended with: too old resource version: 18322 (22672)\nW0817 13:08:50.599393       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ContainerRuntimeConfig ended with: too old resource version: 18409 (23195)\nW0817 13:08:50.601470       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 18320 (22672)\nW0817 13:08:50.606754       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 20964 (22530)\nW0817 13:08:50.610579       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.KubeletConfig ended with: too old resource version: 18415 (23193)\nW0817 13:08:50.615669       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 18400 (23068)\nW0817 13:08:50.615913       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1alpha1.ImageContentSourcePolicy ended with: too old resource version: 18321 (23161)\nI0817 13:08:51.659222       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0817 13:08:51.694774       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nW0817 13:13:46.867579       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28027 (28357)\nW0817 13:13:49.763696       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28357 (28376)\n
Aug 17 13:20:54.545 E ns/openshift-machine-config-operator pod/machine-config-server-nxhqk node/ip-10-0-131-26.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0817 12:41:03.359130       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0817 12:41:03.360152       1 api.go:56] Launching server on :22624\nI0817 12:41:03.360215       1 api.go:56] Launching server on :22623\nI0817 12:46:28.178289       1 api.go:102] Pool worker requested by 10.0.159.34:14922\n
Aug 17 13:21:06.130 E ns/openshift-monitoring pod/telemeter-client-55479588c4-j5r2r node/ip-10-0-136-189.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Aug 17 13:21:06.130 E ns/openshift-monitoring pod/telemeter-client-55479588c4-j5r2r node/ip-10-0-136-189.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Aug 17 13:21:06.271 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-189.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 13:09:07 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:21:06.271 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-189.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 13:09:10 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:10 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:09:10 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:09:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 13:09:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:09:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:09:10 http.go:106: HTTPS: listening on [::]:9095\n2020/08/17 13:12:44 reverseproxy.go:447: http: proxy error: context canceled\n2020/08/17 13:12:45 reverseproxy.go:447: http: proxy error: context canceled\n2020/08/17 13:12:47 reverseproxy.go:447: http: proxy error: context canceled\n
Aug 17 13:21:06.309 E ns/openshift-monitoring pod/kube-state-metrics-845c77b776-lrnjs node/ip-10-0-136-189.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Aug 17 13:21:06.379 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-17T13:09:26.189Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-17T13:09:26.192Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-17T13:09:26.193Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-17T13:09:26.194Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-17T13:09:26.194Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-17T13:09:26.195Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-17T13:09:26.195Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-17
Aug 17 13:21:06.379 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/17 13:09:29 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 17 13:21:06.379 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/17 13:09:29 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:09:29 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:09:29 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:09:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 13:09:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:09:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:09:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:09:29 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 13:09:29 http.go:106: HTTPS: listening on [::]:9091\n2020/08/17 13:09:47 oauthproxy.go:774: basicauth: 10.131.0.30:60890 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:12:45 reverseproxy.go:447: http: proxy error: context canceled\n2020/08/17 13:14:17 oauthproxy.go:774: basicauth: 10.131.0.30:37688 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/17 13:18:47 oauthproxy.go:774: basicauth: 10.131.0.30:42760 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 17 13:21:06.379 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-17T13:09:28.839374107Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-17T13:09:28.839500087Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-17T13:09:28.841588635Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-17T13:09:33.976034801Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 17 13:21:09.130 E ns/openshift-machine-config-operator pod/machine-config-server-fhxlr node/ip-10-0-129-220.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0817 12:41:04.442917       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0817 12:41:04.444114       1 api.go:56] Launching server on :22624\nI0817 12:41:04.444250       1 api.go:56] Launching server on :22623\nI0817 12:46:27.761539       1 api.go:102] Pool worker requested by 10.0.159.34:5778\n
Aug 17 13:21:16.913 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-17T13:21:14.325Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-17T13:21:14.331Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-17T13:21:14.331Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-17T13:21:14.335Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-17T13:21:14.335Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-17T13:21:14.335Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-17T13:21:14.336Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-17T13:21:14.336Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-17T13:21:14.336Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-17
Aug 17 13:21:34.199 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Aug 17 13:22:02.643 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:22:37.315 E ns/openshift-marketplace pod/community-operators-65d4c84ccb-bn7x6 node/ip-10-0-151-62.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 17 13:22:40.967 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 17 13:22:49.416 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Grafana host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io grafana)
Aug 17 13:23:44.886 E ns/openshift-cluster-node-tuning-operator pod/tuned-5f8wp node/ip-10-0-136-189.us-west-1.compute.internal container=tuned container exited with code 143 (Error): t-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:17:53.172398   55799 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:17:53.286326   55799 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:21:07.348963   55799 openshift-tuned.go:550] Pod (openshift-console/downloads-69c645b6d-gh4zj) labels changed node wide: true\nI0817 13:21:08.170825   55799 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:08.172396   55799 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:08.303402   55799 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:21:11.284828   55799 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-5dd5556d7f-5qjwx) labels changed node wide: true\nI0817 13:21:13.171330   55799 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:13.172814   55799 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:13.292704   55799 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:21:41.240043   55799 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-8973/foo-6684t) labels changed node wide: false\nI0817 13:21:41.268967   55799 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-8973/foo-5p764) labels changed node wide: true\nI0817 13:21:43.170844   55799 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:43.172132   55799 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:43.289720   55799 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:22:01.241485   55799 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-7880/service-test-xmg8l) labels changed node wide: true\n
Aug 17 13:23:44.939 E ns/openshift-monitoring pod/node-exporter-k8l65 node/ip-10-0-136-189.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T13:09:15Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:15Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:23:44.960 E ns/openshift-sdn pod/ovs-m55pq node/ip-10-0-136-189.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): in the last 0 s (4 deletes)\n2020-08-17T13:21:05.404Z|00185|bridge|INFO|bridge br0: deleted interface veth0eaccef6 on port 12\n2020-08-17T13:21:05.451Z|00186|connmgr|INFO|br0<->unix#556: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:05.496Z|00187|connmgr|INFO|br0<->unix#559: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:05.529Z|00188|bridge|INFO|bridge br0: deleted interface veth2f8d10de on port 17\n2020-08-17T13:21:05.576Z|00189|connmgr|INFO|br0<->unix#562: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:05.615Z|00190|connmgr|INFO|br0<->unix#565: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:05.649Z|00191|bridge|INFO|bridge br0: deleted interface vethd1e439f8 on port 4\n2020-08-17T13:21:05.711Z|00192|connmgr|INFO|br0<->unix#568: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:05.750Z|00193|connmgr|INFO|br0<->unix#571: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:05.798Z|00194|bridge|INFO|bridge br0: deleted interface veth5b68cd2b on port 6\n2020-08-17T13:21:33.848Z|00195|connmgr|INFO|br0<->unix#595: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:33.876Z|00196|connmgr|INFO|br0<->unix#598: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:33.901Z|00197|bridge|INFO|bridge br0: deleted interface veth8800213a on port 13\n2020-08-17T13:21:33.936Z|00198|connmgr|INFO|br0<->unix#601: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:33.981Z|00199|connmgr|INFO|br0<->unix#604: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:34.008Z|00200|bridge|INFO|bridge br0: deleted interface vethf0df5be1 on port 3\n2020-08-17T13:21:49.858Z|00201|connmgr|INFO|br0<->unix#619: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:49.886Z|00202|connmgr|INFO|br0<->unix#622: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:49.907Z|00203|bridge|INFO|bridge br0: deleted interface veth3ab947cb on port 18\n2020-08-17 13:22:02 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 17 13:23:44.990 E ns/openshift-multus pod/multus-8xjq7 node/ip-10-0-136-189.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 17 13:23:45.026 E ns/openshift-machine-config-operator pod/machine-config-daemon-gwgxm node/ip-10-0-136-189.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:23:49.960 E ns/openshift-multus pod/multus-8xjq7 node/ip-10-0-136-189.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:23:52.965 E ns/openshift-machine-config-operator pod/machine-config-daemon-gwgxm node/ip-10-0-136-189.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 17 13:24:01.526 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): ing_queue.go:346] Unable to find backoff value for pod e2e-k8s-service-lb-available-7880/service-test-kfphf in backoffQ\nI0817 13:21:34.225234       1 factory.go:545] Unable to schedule e2e-k8s-service-lb-available-7880/service-test-kfphf: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) had taints that the pod didn't tolerate, 2 node(s) were unschedulable.; waiting\nE0817 13:21:35.224749       1 scheduling_queue.go:346] Unable to find backoff value for pod openshift-machine-config-operator/etcd-quorum-guard-584599966c-fkw8m in backoffQ\nI0817 13:21:35.225556       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-584599966c-fkw8m: no fit: 0/5 nodes are available: 1 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0817 13:21:41.240110       1 factory.go:545] Unable to schedule openshift-ingress/router-default-75d7f86b4b-wnxhq: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) didn't match node selector, 2 node(s) were unschedulable.; waiting\nI0817 13:21:41.246937       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-584599966c-fkw8m: no fit: 0/5 nodes are available: 1 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0817 13:21:41.252554       1 factory.go:545] Unable to schedule e2e-k8s-service-lb-available-7880/service-test-kfphf: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) had taints that the pod didn't tolerate, 2 node(s) were unschedulable.; waiting\n
Aug 17 13:24:01.717 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): compacted\nE0817 13:21:43.712068       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:21:43.712193       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0817 13:21:43.872888       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0817 13:21:43.873019       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0817 13:21:43.905019       1 healthz.go:191] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\n
Aug 17 13:24:01.717 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0817 13:03:09.625569       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 17 13:24:01.717 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:14:53.305415       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:14:53.305862       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:14:53.514350       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:14:53.514696       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 17 13:24:01.779 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 1 (Error): I0817 13:05:29.846724       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:05:29.848627       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:05:29.848747       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Aug 17 13:24:01.779 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-9 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:20:29.588563       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:20:29.589030       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:20:39.600036       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:20:39.600379       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:20:49.611511       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:20:49.611856       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:20:59.625104       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:20:59.625797       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:21:09.638796       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:21:09.639228       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:21:19.664261       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:21:19.664562       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:21:29.675626       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:21:29.676044       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:21:39.686630       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:21:39.686999       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 17 13:24:01.779 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-220.us-west-1.compute.internal node/ip-10-0-129-220.us-west-1.compute.internal container=kube-controller-manager-9 container exited with code 2 (Error): :33.847488       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0817 13:04:38.971460       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0817 13:04:39.572429       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0817 13:04:39.572476       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0817 13:04:41.324138       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0817 13:04:41.324172       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0817 13:04:43.332769       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0817 13:04:52.220255       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Aug 17 13:24:01.988 E ns/openshift-cluster-node-tuning-operator pod/tuned-v8xz9 node/ip-10-0-129-220.us-west-1.compute.internal container=tuned container exited with code 143 (Error):    73216 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-4-ip-10-0-129-220.us-west-1.compute.internal) labels changed node wide: false\nI0817 13:21:08.818734   73216 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-2-ip-10-0-129-220.us-west-1.compute.internal) labels changed node wide: false\nI0817 13:21:09.000798   73216 openshift-tuned.go:550] Pod (openshift-machine-api/cluster-autoscaler-operator-5956778c6b-xv2rl) labels changed node wide: true\nI0817 13:21:13.176104   73216 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:13.180905   73216 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:13.353850   73216 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:21:22.561104   73216 openshift-tuned.go:550] Pod (openshift-image-registry/cluster-image-registry-operator-848d65bc67-qfrbc) labels changed node wide: true\nI0817 13:21:23.175981   73216 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:23.177533   73216 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:23.302830   73216 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:21:42.559139   73216 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-694c8d685c-c29gh) labels changed node wide: true\nI0817 13:21:43.175996   73216 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:21:43.177716   73216 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:21:43.307784   73216 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:21:43.689032   73216 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-129-220.us-west-1.compute.internal) labels changed node wide: true\n
Aug 17 13:24:02.033 E ns/openshift-monitoring pod/node-exporter-r2b85 node/ip-10-0-129-220.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T13:09:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:24:02.057 E ns/openshift-controller-manager pod/controller-manager-m5qrh node/ip-10-0-129-220.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 17 13:24:02.104 E ns/openshift-sdn pod/ovs-cqdrl node/ip-10-0-129-220.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 3:21:07.300Z|00175|connmgr|INFO|br0<->unix#529: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:07.402Z|00176|bridge|INFO|bridge br0: deleted interface vethae66cdb2 on port 8\n2020-08-17T13:21:07.528Z|00177|connmgr|INFO|br0<->unix#532: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:07.588Z|00178|connmgr|INFO|br0<->unix#535: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:07.664Z|00179|bridge|INFO|bridge br0: deleted interface vethbba34a5e on port 6\n2020-08-17T13:21:07.721Z|00180|connmgr|INFO|br0<->unix#538: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:07.826Z|00181|connmgr|INFO|br0<->unix#541: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:07.884Z|00182|bridge|INFO|bridge br0: deleted interface vethcf2551eb on port 16\n2020-08-17T13:21:07.943Z|00183|connmgr|INFO|br0<->unix#544: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:07.991Z|00184|connmgr|INFO|br0<->unix#547: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:08.029Z|00185|bridge|INFO|bridge br0: deleted interface vethb21904e0 on port 14\n2020-08-17T13:21:09.984Z|00186|connmgr|INFO|br0<->unix#550: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:10.029Z|00187|connmgr|INFO|br0<->unix#553: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:10.061Z|00188|bridge|INFO|bridge br0: deleted interface veth9d864e03 on port 7\n2020-08-17T13:21:29.972Z|00189|connmgr|INFO|br0<->unix#571: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:21:30.002Z|00190|connmgr|INFO|br0<->unix#574: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:21:30.029Z|00191|bridge|INFO|bridge br0: deleted interface vethb167aa31 on port 15\n2020-08-17 13:21:43 info: Saving flows ...\n2020-08-17T13:21:43Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-08-17T13:21:43Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Aug 17 13:24:02.123 E ns/openshift-sdn pod/sdn-controller-jttjc node/ip-10-0-129-220.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0817 13:11:54.228617       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 17 13:24:02.175 E ns/openshift-multus pod/multus-admission-controller-vx6pk node/ip-10-0-129-220.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Aug 17 13:24:02.206 E ns/openshift-multus pod/multus-6n28z node/ip-10-0-129-220.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 17 13:24:02.286 E ns/openshift-machine-config-operator pod/machine-config-daemon-k94cf node/ip-10-0-129-220.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:24:02.318 E ns/openshift-machine-config-operator pod/machine-config-server-zxpkf node/ip-10-0-129-220.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0817 13:21:14.089319       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0817 13:21:14.091320       1 api.go:56] Launching server on :22624\nI0817 13:21:14.092014       1 api.go:56] Launching server on :22623\n
Aug 17 13:24:03.901 E ns/openshift-monitoring pod/thanos-querier-5dd5556d7f-sx2h6 node/ip-10-0-151-62.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/17 13:08:38 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/17 13:08:38 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:08:38 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:08:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 13:08:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:08:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/17 13:08:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:08:38 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 13:08:38 http.go:106: HTTPS: listening on [::]:9091\n
Aug 17 13:24:04.981 E ns/openshift-marketplace pod/community-operators-76655bdf48-ljzml node/ip-10-0-151-62.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 17 13:24:06.017 E ns/openshift-monitoring pod/kube-state-metrics-845c77b776-f89gm node/ip-10-0-151-62.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Aug 17 13:24:06.038 E ns/openshift-monitoring pod/prometheus-adapter-5c56c6756d-2j25h node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0817 13:21:07.308950       1 adapter.go:93] successfully using in-cluster auth\nI0817 13:21:09.150978       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 17 13:24:06.065 E ns/openshift-monitoring pod/grafana-c99d789c5-wbqql node/ip-10-0-151-62.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Aug 17 13:24:06.144 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-62.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 13:09:19 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:24:06.144 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-62.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 13:09:19 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:19 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:09:19 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:09:19 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 13:09:19 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:09:19 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:19 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:09:19 http.go:106: HTTPS: listening on [::]:9095\n2020/08/17 13:12:47 reverseproxy.go:447: http: proxy error: context canceled\n
Aug 17 13:24:06.192 E ns/openshift-monitoring pod/openshift-state-metrics-667c6957bf-lwds5 node/ip-10-0-151-62.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 17 13:24:06.215 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-62.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 13:21:15 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:24:06.215 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-62.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 13:21:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:21:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:21:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:21:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 13:21:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:21:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:21:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:21:15 http.go:106: HTTPS: listening on [::]:9095\n
Aug 17 13:24:06.245 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-62.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/17 13:21:14 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 17 13:24:06.245 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/17 13:21:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:21:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:21:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:21:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 13:21:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:21:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:21:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:21:15 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 13:21:15 http.go:106: HTTPS: listening on [::]:9091\n
Aug 17 13:24:06.245 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-17T13:21:14.499062362Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-17T13:21:14.499202538Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-17T13:21:14.500634363Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-17T13:21:19.854650385Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 17 13:24:06.283 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-151-62.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/17 13:09:34 Watching directory: "/etc/alertmanager/config"\n
Aug 17 13:24:06.283 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-151-62.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/17 13:09:34 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:34 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:09:34 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:09:34 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/17 13:09:34 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:09:34 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/17 13:09:34 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:09:34 http.go:106: HTTPS: listening on [::]:9095\n2020/08/17 13:12:51 reverseproxy.go:447: http: proxy error: context canceled\n2020/08/17 13:14:13 reverseproxy.go:447: http: proxy error: context canceled\n
Aug 17 13:24:06.305 E ns/openshift-ingress pod/router-default-75d7f86b4b-dx4dz node/ip-10-0-151-62.us-west-1.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:00.654795       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:20.332857       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:25.313704       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:30.296514       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:35.593325       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:40.591608       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:22:45.589434       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:23:48.893225       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:23:53.884915       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0817 13:23:58.885260       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 17 13:24:06.325 E ns/openshift-marketplace pod/redhat-operators-66bb6bc75-br78p node/ip-10-0-151-62.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 17 13:24:07.130 E ns/openshift-monitoring pod/telemeter-client-55479588c4-kcjjj node/ip-10-0-151-62.us-west-1.compute.internal container=reload container exited with code 2 (Error): 
Aug 17 13:24:07.130 E ns/openshift-monitoring pod/telemeter-client-55479588c4-kcjjj node/ip-10-0-151-62.us-west-1.compute.internal container=telemeter-client container exited with code 2 (Error): 
Aug 17 13:24:07.318 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/17 13:09:02 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 17 13:24:07.318 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/17 13:09:02 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:09:02 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:09:02 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:09:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 13:09:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:09:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/17 13:09:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:09:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 13:09:03 http.go:106: HTTPS: listening on [::]:9091\n2020/08/17 13:12:46 reverseproxy.go:447: http: proxy error: context canceled\n2020/08/17 13:21:10 oauthproxy.go:774: basicauth: 10.128.2.48:57790 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 17 13:24:07.318 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-62.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-17T13:08:59.810573024Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-17T13:08:59.810725164Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-17T13:08:59.812423886Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-17T13:09:04.940052931Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 17 13:24:09.192 E ns/openshift-monitoring pod/node-exporter-r2b85 node/ip-10-0-129-220.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:24:11.845 E ns/openshift-multus pod/multus-6n28z node/ip-10-0-129-220.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:24:12.762 E ns/openshift-machine-config-operator pod/machine-config-daemon-k94cf node/ip-10-0-129-220.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 17 13:24:18.757 E ns/openshift-multus pod/multus-6n28z node/ip-10-0-129-220.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:24:26.691 E ns/openshift-console pod/console-6798bcf7db-p66dp node/ip-10-0-156-63.us-west-1.compute.internal container=console container exited with code 2 (Error): /main: cookies are secure!\n2020/08/17 13:10:40 cmd/main: Binding to [::]:8443...\n2020/08/17 13:10:40 cmd/main: using TLS\n2020/08/17 13:11:56 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:33 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:38 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:50 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:57 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Aug 17 13:24:30.302 E ns/openshift-monitoring pod/thanos-querier-5dd5556d7f-86j8b node/ip-10-0-156-63.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/17 13:24:10 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/17 13:24:10 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/17 13:24:10 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/17 13:24:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/17 13:24:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/17 13:24:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/17 13:24:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/17 13:24:10 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/17 13:24:10 http.go:106: HTTPS: listening on [::]:9091\n
Aug 17 13:24:31.287 E ns/openshift-machine-api pod/machine-api-operator-7794f5df67-45tk5 node/ip-10-0-156-63.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 17 13:24:52.662 E kube-apiserver failed contacting the API: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=34978&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 54.193.215.168:6443: connect: connection refused
Aug 17 13:24:52.806 E kube-apiserver Kube API started failing: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 54.193.232.103:6443: connect: connection refused
Aug 17 13:24:53.161 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster: could not retrieve existing (/v1, Kind=ConfigMap) openshift-network-operator/applied-cluster: Get https://api-int.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-network-operator/configmaps/applied-cluster: unexpected EOF
Aug 17 13:24:59.643 E kube-apiserver Kube API started failing: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
Aug 17 13:25:02.643 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:25:41.599 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 17 13:25:46.097 E ns/openshift-marketplace pod/certified-operators-5d5787874d-6xgmv node/ip-10-0-136-189.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 17 13:26:02.160 E ns/openshift-marketplace pod/community-operators-76655bdf48-m7kxk node/ip-10-0-136-189.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 17 13:26:46.952 E ns/openshift-cluster-node-tuning-operator pod/tuned-sjv8z node/ip-10-0-151-62.us-west-1.compute.internal container=tuned container exited with code 143 (Error): t-marketplace/community-operators-65d4c84ccb-bn7x6) labels changed node wide: true\nI0817 13:22:44.610597   77444 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:22:44.614338   77444 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:22:44.749948   77444 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:24:01.014785   77444 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-4590/dp-657fc4b57d-b856k) labels changed node wide: true\nI0817 13:24:04.610998   77444 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:04.617670   77444 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:04.884462   77444 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:24:11.358564   77444 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-76655bdf48-ljzml) labels changed node wide: true\nI0817 13:24:14.610542   77444 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:14.612083   77444 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:14.738880   77444 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0817 13:24:41.358258   77444 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-8973/foo-8s6lk) labels changed node wide: false\nI0817 13:24:41.389528   77444 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-8973/foo-s2wqz) labels changed node wide: true\nI0817 13:24:44.610539   77444 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:44.612159   77444 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:44.742631   77444 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n
Aug 17 13:26:46.987 E ns/openshift-monitoring pod/node-exporter-znlms node/ip-10-0-151-62.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T13:08:46Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T13:08:46Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:26:47.026 E ns/openshift-sdn pod/ovs-vvkm8 node/ip-10-0-151-62.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): ace vethd6e6756d on port 7\n2020-08-17T13:24:05.686Z|00264|connmgr|INFO|br0<->unix#800: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:05.723Z|00265|connmgr|INFO|br0<->unix#803: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:05.748Z|00266|bridge|INFO|bridge br0: deleted interface vethfeada40d on port 31\n2020-08-17T13:24:05.791Z|00267|connmgr|INFO|br0<->unix#806: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:05.845Z|00268|connmgr|INFO|br0<->unix#809: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:05.875Z|00269|bridge|INFO|bridge br0: deleted interface vethe8f3dfa0 on port 11\n2020-08-17T13:24:05.922Z|00270|connmgr|INFO|br0<->unix#812: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:05.959Z|00271|connmgr|INFO|br0<->unix#815: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:05.988Z|00272|bridge|INFO|bridge br0: deleted interface veth9b982fd4 on port 29\n2020-08-17T13:24:08.070Z|00025|jsonrpc|WARN|unix#726: receive error: Connection reset by peer\n2020-08-17T13:24:08.070Z|00026|reconnect|WARN|unix#726: connection dropped (Connection reset by peer)\n2020-08-17T13:24:31.515Z|00273|connmgr|INFO|br0<->unix#836: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:31.543Z|00274|connmgr|INFO|br0<->unix#839: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:31.564Z|00275|bridge|INFO|bridge br0: deleted interface veth33bc5892 on port 26\n2020-08-17T13:24:31.597Z|00276|connmgr|INFO|br0<->unix#842: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:31.641Z|00277|connmgr|INFO|br0<->unix#845: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:31.663Z|00278|bridge|INFO|bridge br0: deleted interface veth2a3746c2 on port 28\n2020-08-17T13:24:51.323Z|00279|connmgr|INFO|br0<->unix#863: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:51.354Z|00280|connmgr|INFO|br0<->unix#866: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:51.374Z|00281|bridge|INFO|bridge br0: deleted interface veth7151ab86 on port 17\n2020-08-17 13:25:00 info: Saving flows ...\n
Aug 17 13:26:47.049 E ns/openshift-multus pod/multus-pbnr6 node/ip-10-0-151-62.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 17 13:26:47.110 E ns/openshift-machine-config-operator pod/machine-config-daemon-2jgnq node/ip-10-0-151-62.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:26:52.209 E ns/openshift-multus pod/multus-pbnr6 node/ip-10-0-151-62.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:26:53.324 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-189.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-17T13:26:51.725Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-17T13:26:51.729Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-17T13:26:51.731Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-17T13:26:51.732Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-17T13:26:51.732Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-17T13:26:51.732Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-17T13:26:51.733Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-17T13:26:51.733Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-17
Aug 17 13:26:55.277 E ns/openshift-machine-config-operator pod/machine-config-daemon-2jgnq node/ip-10-0-151-62.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 17 13:27:06.626 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator ingress is reporting a failure: Some ingresscontrollers are degraded: default
Aug 17 13:27:07.512 E ns/openshift-cluster-node-tuning-operator pod/tuned-n57mj node/ip-10-0-156-63.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 2   72414 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-5cc76df8d7-zkvl4) labels changed node wide: true\nI0817 13:24:28.429947   72414 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:28.431742   72414 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:28.767045   72414 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:24:28.769972   72414 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-5-ip-10-0-156-63.us-west-1.compute.internal) labels changed node wide: false\nI0817 13:24:29.170867   72414 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-156-63.us-west-1.compute.internal) labels changed node wide: true\nI0817 13:24:33.431884   72414 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:33.434319   72414 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:33.637530   72414 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:24:37.863931   72414 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-156-63.us-west-1.compute.internal) labels changed node wide: false\nI0817 13:24:39.206278   72414 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-controllers-5ff486dd5b-hrfmw) labels changed node wide: true\nI0817 13:24:43.429494   72414 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:24:43.430971   72414 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:24:43.586766   72414 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0817 13:24:51.549786   72414 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-694c8d685c-4wwcd) labels changed node wide: true\n
Aug 17 13:27:07.528 E ns/openshift-monitoring pod/node-exporter-dtgn2 node/ip-10-0-156-63.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T13:09:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:27:07.580 E ns/openshift-controller-manager pod/controller-manager-4z9rk node/ip-10-0-156-63.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 17 13:27:07.590 E ns/openshift-sdn pod/sdn-controller-btrgh node/ip-10-0-156-63.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0817 13:11:33.034667       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 17 13:27:07.605 E ns/openshift-sdn pod/ovs-rf4wv node/ip-10-0-156-63.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 17|connmgr|INFO|br0<->unix#817: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-17T13:24:33.498Z|00218|connmgr|INFO|br0<->unix#819: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:36.563Z|00219|connmgr|INFO|br0<->unix#822: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:36.596Z|00220|connmgr|INFO|br0<->unix#825: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:36.623Z|00221|bridge|INFO|bridge br0: deleted interface vethcd5bb4c2 on port 25\n2020-08-17T13:24:38.697Z|00222|bridge|INFO|bridge br0: added interface vethb2bdfa87 on port 26\n2020-08-17T13:24:38.733Z|00223|connmgr|INFO|br0<->unix#831: 5 flow_mods in the last 0 s (5 adds)\n2020-08-17T13:24:38.787Z|00224|connmgr|INFO|br0<->unix#835: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:38.788Z|00225|connmgr|INFO|br0<->unix#837: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-17T13:24:41.613Z|00226|connmgr|INFO|br0<->unix#840: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:41.650Z|00227|connmgr|INFO|br0<->unix#843: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:41.687Z|00228|bridge|INFO|bridge br0: deleted interface vethb2bdfa87 on port 26\n2020-08-17T13:24:41.669Z|00015|jsonrpc|WARN|unix#739: receive error: Connection reset by peer\n2020-08-17T13:24:41.669Z|00016|reconnect|WARN|unix#739: connection dropped (Connection reset by peer)\n2020-08-17T13:24:50.238Z|00229|connmgr|INFO|br0<->unix#855: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:24:50.293Z|00230|connmgr|INFO|br0<->unix#858: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:24:50.327Z|00231|bridge|INFO|bridge br0: deleted interface vethff026ee8 on port 14\n2020-08-17 13:24:52 info: Saving flows ...\n2020-08-17T13:24:52Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-08-17T13:24:52Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Aug 17 13:27:07.617 E ns/openshift-multus pod/multus-8cd7q node/ip-10-0-156-63.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 17 13:27:07.629 E ns/openshift-multus pod/multus-admission-controller-b6bgj node/ip-10-0-156-63.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 17 13:27:07.658 E ns/openshift-machine-config-operator pod/machine-config-daemon-rrt87 node/ip-10-0-156-63.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:27:07.691 E ns/openshift-machine-config-operator pod/machine-config-server-q442j node/ip-10-0-156-63.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0817 13:21:30.142038       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0817 13:21:30.143680       1 api.go:56] Launching server on :22624\nI0817 13:21:30.143758       1 api.go:56] Launching server on :22623\n
Aug 17 13:27:07.730 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): ired revision has been compacted\nE0817 13:24:51.934269       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934304       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934334       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934374       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934384       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934404       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934436       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934469       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934540       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934544       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934572       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934918       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0817 13:24:51.934964       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0817 13:24:52.192442       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0817 13:24:52.192485       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Aug 17 13:27:07.730 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0817 13:07:44.183230       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 17 13:27:07.730 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:19:19.383468       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:19:19.383889       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:19:19.590425       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:19:19.590783       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 17 13:27:07.786 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 1 (Error): /client-go/route/informers/externalversions/factory.go:101: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nI0817 13:09:28.340566       1 trace.go:81] Trace[1082126229]: "Reflector github.com/openshift/client-go/quota/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-08-17 13:09:12.718416719 +0000 UTC m=+58.877704737) (total time: 15.622041079s):\nTrace[1082126229]: [15.622018887s] [15.622018887s] Objects listed\nW0817 13:18:12.770128       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\nW0817 13:18:26.901209       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nW0817 13:18:27.515612       1 reflector.go:289] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: The resourceVersion for the provided watch is too old.\nW0817 13:19:15.512417       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0817 13:21:44.678230       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ServiceAccount ended with: too old resource version: 25728 (31920)\nW0817 13:21:44.679119       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 25730 (31920)\nW0817 13:21:44.699871       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 25730 (31920)\nW0817 13:21:44.703982       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.RoleBinding ended with: too old resource version: 25734 (31920)\n
Aug 17 13:27:07.786 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-9 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:23:39.836913       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:23:39.837286       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:23:49.847050       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:23:49.848026       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:23:59.860201       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:23:59.860665       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:24:09.871166       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:24:09.871650       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:24:19.884229       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:24:19.884593       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:24:29.927627       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:24:29.928013       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:24:39.944646       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:24:39.945022       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:24:49.955072       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:24:49.955541       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 17 13:27:07.786 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=kube-controller-manager-9 container exited with code 2 (Error): 1304       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0817 13:09:18.269938       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, role.rbac.authorization.k8s.io "system:openshift:leader-election-lock-kube-controller-manager" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]\n
Aug 17 13:27:07.808 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-63.us-west-1.compute.internal node/ip-10-0-156-63.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): ted, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0817 13:24:43.322017       1 factory.go:545] Unable to schedule openshift-ingress/router-default-75d7f86b4b-ntwq5: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) didn't match node selector, 2 node(s) were unschedulable.; waiting\nI0817 13:24:44.321141       1 factory.go:545] Unable to schedule e2e-k8s-service-lb-available-7880/service-test-zxdlm: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) had taints that the pod didn't tolerate, 2 node(s) were unschedulable.; waiting\nI0817 13:24:47.241302       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-6d6fb58d4d-g9ht8 is bound successfully on node "ip-10-0-131-26.us-west-1.compute.internal", 5 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0817 13:24:48.322394       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-584599966c-7csg8: no fit: 0/5 nodes are available: 1 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0817 13:24:49.322653       1 factory.go:545] Unable to schedule e2e-k8s-service-lb-available-7880/service-test-zxdlm: no fit: 0/5 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) had taints that the pod didn't tolerate, 2 node(s) were unschedulable.; waiting\n
Aug 17 13:27:13.457 E ns/openshift-monitoring pod/node-exporter-dtgn2 node/ip-10-0-156-63.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:27:13.476 E ns/openshift-multus pod/multus-8cd7q node/ip-10-0-156-63.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:27:15.748 E ns/openshift-multus pod/multus-8cd7q node/ip-10-0-156-63.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:27:17.848 E ns/openshift-machine-config-operator pod/machine-config-daemon-rrt87 node/ip-10-0-156-63.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 17 13:27:21.312 E ns/openshift-multus pod/multus-8cd7q node/ip-10-0-156-63.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:27:33.696 E ns/openshift-cluster-machine-approver pod/machine-approver-866b44d698-l4frp node/ip-10-0-131-26.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error):  nor --master was specified.  Using the inClusterConfig.  This might not work.\nI0817 13:08:16.494379       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0817 13:08:16.494450       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0817 13:08:16.497962       1 main.go:236] Starting Machine Approver\nI0817 13:08:16.598297       1 main.go:146] CSR csr-8lb79 added\nI0817 13:08:16.598333       1 main.go:149] CSR csr-8lb79 is already approved\nI0817 13:08:16.598352       1 main.go:146] CSR csr-dndqk added\nI0817 13:08:16.598361       1 main.go:149] CSR csr-dndqk is already approved\nI0817 13:08:16.598390       1 main.go:146] CSR csr-vz2nd added\nI0817 13:08:16.598399       1 main.go:149] CSR csr-vz2nd is already approved\nI0817 13:08:16.598413       1 main.go:146] CSR csr-wwk2g added\nI0817 13:08:16.598422       1 main.go:149] CSR csr-wwk2g is already approved\nI0817 13:08:16.598435       1 main.go:146] CSR csr-vkx6g added\nI0817 13:08:16.598444       1 main.go:149] CSR csr-vkx6g is already approved\nI0817 13:08:16.598457       1 main.go:146] CSR csr-4dk5l added\nI0817 13:08:16.598467       1 main.go:149] CSR csr-4dk5l is already approved\nI0817 13:08:16.598480       1 main.go:146] CSR csr-57ckk added\nI0817 13:08:16.598490       1 main.go:149] CSR csr-57ckk is already approved\nI0817 13:08:16.598504       1 main.go:146] CSR csr-g8mxh added\nI0817 13:08:16.598513       1 main.go:149] CSR csr-g8mxh is already approved\nI0817 13:08:16.598526       1 main.go:146] CSR csr-phtqk added\nI0817 13:08:16.598535       1 main.go:149] CSR csr-phtqk is already approved\nI0817 13:08:16.598554       1 main.go:146] CSR csr-skkzq added\nI0817 13:08:16.598708       1 main.go:149] CSR csr-skkzq is already approved\nW0817 13:25:01.086059       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 22530 (35033)\n
Aug 17 13:27:34.220 E ns/openshift-console-operator pod/console-operator-7bbccff89b-hcjhq node/ip-10-0-131-26.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): onTime":"2020-08-17T12:45:13Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0817 13:25:54.848462       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"844e572c-2248-4226-85f7-b3c7eb309f13", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteSyncDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to ""\nE0817 13:27:30.916308       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-0.test-2020-08-17-122211-ci-op-ym6wq58d\nI0817 13:27:30.971524       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-08-17T12:45:13Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-17T13:10:46Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-17T13:27:30Z","message":"DeploymentAvailable: 1 replicas ready at version 0.0.1-0.test-2020-08-17-122211-ci-op-ym6wq58d","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-08-17T12:45:13Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0817 13:27:31.016176       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"844e572c-2248-4226-85f7-b3c7eb309f13", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 0.0.1-0.test-2020-08-17-122211-ci-op-ym6wq58d")\nI0817 13:27:31.108212       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:27:31.108356       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:27:35.429 E ns/openshift-insights pod/insights-operator-848d9db8f-dmncn node/ip-10-0-131-26.us-west-1.compute.internal container=operator container exited with code 2 (Error): ith fingerprint=\nI0817 13:27:03.424849       1 diskrecorder.go:63] Recording config/configmaps/initial-kube-apiserver-server-ca/ca-bundle.crt with fingerprint=\nI0817 13:27:03.424887       1 diskrecorder.go:63] Recording config/configmaps/openshift-install/invoker with fingerprint=\nI0817 13:27:03.424900       1 diskrecorder.go:63] Recording config/configmaps/openshift-install/version with fingerprint=\nI0817 13:27:03.424907       1 diskrecorder.go:63] Recording config/configmaps/openshift-install-manifests/invoker with fingerprint=\nI0817 13:27:03.424913       1 diskrecorder.go:63] Recording config/configmaps/openshift-install-manifests/version with fingerprint=\nI0817 13:27:03.427859       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0817 13:27:03.427937       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0817 13:27:03.430575       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0817 13:27:03.432955       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0817 13:27:03.435367       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0817 13:27:03.437948       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0817 13:27:03.440665       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0817 13:27:03.443564       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0817 13:27:03.445955       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0817 13:27:03.451923       1 diskrecorder.go:170] Writing 55 records to /var/lib/insights-operator/insights-2020-08-17-132703.tar.gz\nI0817 13:27:03.456146       1 diskrecorder.go:134] Wrote 55 records to disk in 4ms\nI0817 13:27:03.456194       1 periodic.go:151] Periodic gather config completed in 116ms\nI0817 13:27:20.129191       1 httplog.go:90] GET /metrics: (7.442363ms) 200 [Prometheus/2.14.0 10.131.0.24:38266]\nI0817 13:27:29.141147       1 httplog.go:90] GET /metrics: (1.699018ms) 200 [Prometheus/2.14.0 10.131.0.32:52624]\n
Aug 17 13:27:35.556 E ns/openshift-console pod/console-6798bcf7db-j67vm node/ip-10-0-131-26.us-west-1.compute.internal container=console container exited with code 2 (Error): -ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:38 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:45 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:12:53 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:21:44 auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp 172.30.0.1:443: connect: connection refused\n2020/08/17 13:21:50 auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/08/17 13:21:55 auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp: lookup kubernetes.default.svc on 172.30.0.10:53: no such host\n
Aug 17 13:27:35.579 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7f7977d8d7-7r2mk node/ip-10-0-131-26.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0817 13:24:49.955072       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0817 13:24:49.955541       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-156-63.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal container=\"cluster-policy-controller-9\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-156-63.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal container=\"kube-controller-manager-9\" is not ready"\nI0817 13:27:27.455641       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"54555485-e80f-46df-9ca4-bf97e8447b39", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-156-63.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal container=\"cluster-policy-controller-9\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-156-63.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal container=\"kube-controller-manager-9\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-156-63.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-156-63.us-west-1.compute.internal container=\"cluster-policy-controller-9\" is not ready"\nI0817 13:27:34.033585       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:27:34.033796       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:27:36.800 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7584f4cf55-2w9cx node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-7584f4cf55-2w9cx_e1795a00-3822-44e4-9bdc-b4afe369204c/kube-apiserver-operator/0.log": lstat /var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-7584f4cf55-2w9cx_e1795a00-3822-44e4-9bdc-b4afe369204c/kube-apiserver-operator/0.log: no such file or directory
Aug 17 13:27:40.076 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-866fb84cc5-kwsfl node/ip-10-0-131-26.us-west-1.compute.internal container=operator container exited with code 255 (Error): ft-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0817 13:27:01.190885       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0817 13:27:01.190927       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0817 13:27:01.192625       1 httplog.go:90] GET /metrics: (7.327301ms) 200 [Prometheus/2.14.0 10.131.0.24:39118]\nI0817 13:27:10.371288       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0817 13:27:15.862696       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0817 13:27:15.862729       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0817 13:27:15.864114       1 httplog.go:90] GET /metrics: (6.234322ms) 200 [Prometheus/2.14.0 10.131.0.32:60994]\nI0817 13:27:20.387105       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0817 13:27:30.511941       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0817 13:27:31.202037       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0817 13:27:31.202072       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0817 13:27:31.212626       1 httplog.go:90] GET /metrics: (28.012101ms) 200 [Prometheus/2.14.0 10.131.0.24:39118]\nI0817 13:27:36.441757       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 1 items received\nI0817 13:27:38.593195       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0817 13:27:38.597204       1 leaderelection.go:66] leaderelection lost\n
Aug 17 13:27:40.124 E ns/openshift-machine-config-operator pod/machine-config-controller-6bb576544-ht4g7 node/ip-10-0-131-26.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): .us-west-1.compute.internal is reporting Unschedulable\nI0817 13:26:58.120456       1 node_controller.go:442] Pool worker: node ip-10-0-151-62.us-west-1.compute.internal has completed update to rendered-worker-0877d83e4b4187eccf41694f18fa5def\nI0817 13:26:58.132870       1 node_controller.go:435] Pool worker: node ip-10-0-151-62.us-west-1.compute.internal is now reporting ready\nI0817 13:27:02.069334       1 status.go:82] Pool worker: All nodes are updated with rendered-worker-0877d83e4b4187eccf41694f18fa5def\nI0817 13:27:07.191085       1 node_controller.go:433] Pool master: node ip-10-0-156-63.us-west-1.compute.internal is now reporting unready: node ip-10-0-156-63.us-west-1.compute.internal is reporting NotReady=False\nI0817 13:27:16.796850       1 node_controller.go:433] Pool master: node ip-10-0-156-63.us-west-1.compute.internal is now reporting unready: node ip-10-0-156-63.us-west-1.compute.internal is reporting Unschedulable\nI0817 13:27:22.713031       1 node_controller.go:442] Pool master: node ip-10-0-156-63.us-west-1.compute.internal has completed update to rendered-master-4f57c3f6d8f8b3f089a4cfd71926cba7\nI0817 13:27:22.737457       1 node_controller.go:435] Pool master: node ip-10-0-156-63.us-west-1.compute.internal is now reporting ready\nI0817 13:27:27.713816       1 node_controller.go:758] Setting node ip-10-0-131-26.us-west-1.compute.internal to desired config rendered-master-4f57c3f6d8f8b3f089a4cfd71926cba7\nI0817 13:27:27.732196       1 node_controller.go:452] Pool master: node ip-10-0-131-26.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-4f57c3f6d8f8b3f089a4cfd71926cba7\nI0817 13:27:28.762025       1 node_controller.go:452] Pool master: node ip-10-0-131-26.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0817 13:27:28.787578       1 node_controller.go:433] Pool master: node ip-10-0-131-26.us-west-1.compute.internal is now reporting unready: node ip-10-0-131-26.us-west-1.compute.internal is reporting Unschedulable\n
Aug 17 13:27:48.436 E ns/openshift-cluster-node-tuning-operator pod/tuned-sjv8z node/ip-10-0-151-62.us-west-1.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0817 13:26:50.840207    2630 openshift-tuned.go:209] Extracting tuned profiles\nI0817 13:26:50.883704    2630 openshift-tuned.go:739] Resync period to pull node/pod labels: 62 [s]\nE0817 13:26:55.090833    2630 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-151-62.us-west-1.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0817 13:26:55.090892    2630 openshift-tuned.go:883] Increasing resyncPeriod to 124\n
Aug 17 13:28:06.122 E kube-apiserver failed contacting the API: Get https://api.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=37566&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp 54.193.232.103:6443: connect: connection refused
Aug 17 13:29:17.643 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 17 13:29:57.997 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Aug 17 13:30:23.209 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=cluster-policy-controller-9 container exited with code 1 (Error): I0817 13:06:52.449354       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0817 13:06:52.451093       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0817 13:06:52.451183       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0817 13:06:52.452681       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Aug 17 13:30:23.209 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-9 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:05.678543       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:05.678886       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:05.679467       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:05.679789       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:10.216323       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:10.217074       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:20.225927       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:20.226495       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:30.318884       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:30.321170       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:40.332666       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:40.333082       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:27:50.349969       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:50.350381       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0817 13:28:00.358250       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:28:00.358632       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 17 13:30:23.209 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-controller-manager-9 container exited with code 2 (Error): er_utils.go:602] Controller packageserver-6d6fb58d4d deleting pod openshift-operator-lifecycle-manager/packageserver-6d6fb58d4d-gpmc7\nI0817 13:27:57.518696       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"fdf413cd-69a9-46b6-ac8a-36bf461d6320", APIVersion:"apps/v1", ResourceVersion:"37447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-6d6fb58d4d to 1\nI0817 13:27:57.535529       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"fdf413cd-69a9-46b6-ac8a-36bf461d6320", APIVersion:"apps/v1", ResourceVersion:"37447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-6496d8c9d7 to 2\nI0817 13:27:57.535568       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-6d6fb58d4d", UID:"c639ed95-ba57-4090-b483-179e62763e8c", APIVersion:"apps/v1", ResourceVersion:"37469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-6d6fb58d4d-gpmc7\nI0817 13:27:57.536816       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-6496d8c9d7, need 2, creating 1\nI0817 13:27:57.565108       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0817 13:27:57.708587       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-6496d8c9d7", UID:"a4edc1ca-7929-4d10-a95f-e222d7d5a6cb", APIVersion:"apps/v1", ResourceVersion:"37473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-6496d8c9d7-jqwgr\n
Aug 17 13:30:23.326 E ns/openshift-controller-manager pod/controller-manager-mx4lq node/ip-10-0-131-26.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 17 13:30:23.344 E ns/openshift-monitoring pod/node-exporter-ct2ff node/ip-10-0-131-26.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-17T13:09:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-17T13:09:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 17 13:30:23.357 E ns/openshift-sdn pod/sdn-controller-ztwzb node/ip-10-0-131-26.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): ng{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-131-26\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-08-17T12:39:53Z\",\"renewTime\":\"2020-08-17T13:11:47Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-131-26 became leader'\nI0817 13:11:47.840720       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0817 13:11:47.849537       1 master.go:51] Initializing SDN master\nI0817 13:11:47.867120       1 network_controller.go:60] Started OpenShift Network Controller\nE0817 13:21:44.284898       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://api-int.ci-op-ym6wq58d-8ce5e.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31900&timeout=8m34s&timeoutSeconds=514&watch=true: dial tcp 10.0.159.34:6443: connect: connection refused\nW0817 13:21:44.586387       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 24270 (25277)\nW0817 13:21:44.678993       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 24258 (25285)\nW0817 13:21:44.773289       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 21406 (31920)\nW0817 13:25:01.082037       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 25277 (35033)\n
Aug 17 13:30:23.371 E ns/openshift-multus pod/multus-admission-controller-qggtp node/ip-10-0-131-26.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Aug 17 13:30:23.390 E ns/openshift-sdn pod/ovs-ns9n2 node/ip-10-0-131-26.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 0 s (2 deletes)\n2020-08-17T13:27:38.659Z|00302|connmgr|INFO|br0<->unix#1018: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:38.722Z|00303|bridge|INFO|bridge br0: deleted interface veth3d881120 on port 36\n2020-08-17T13:27:38.817Z|00304|connmgr|INFO|br0<->unix#1021: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:27:38.891Z|00305|connmgr|INFO|br0<->unix#1024: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:38.940Z|00306|bridge|INFO|bridge br0: deleted interface veth40764e18 on port 23\n2020-08-17T13:27:39.024Z|00307|connmgr|INFO|br0<->unix#1027: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:27:39.086Z|00308|connmgr|INFO|br0<->unix#1030: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:39.129Z|00309|bridge|INFO|bridge br0: deleted interface veth82703cdf on port 6\n2020-08-17T13:27:39.227Z|00310|connmgr|INFO|br0<->unix#1033: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:27:39.272Z|00311|connmgr|INFO|br0<->unix#1036: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:39.322Z|00312|bridge|INFO|bridge br0: deleted interface veth3f6b066a on port 34\n2020-08-17T13:27:39.497Z|00313|connmgr|INFO|br0<->unix#1039: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:27:39.532Z|00314|connmgr|INFO|br0<->unix#1042: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:39.571Z|00315|bridge|INFO|bridge br0: deleted interface vethe51f9052 on port 29\n2020-08-17T13:27:55.503Z|00316|connmgr|INFO|br0<->unix#1059: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-17T13:27:55.539Z|00317|connmgr|INFO|br0<->unix#1062: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-17T13:27:55.568Z|00318|bridge|INFO|bridge br0: deleted interface veth3911fec8 on port 32\n2020-08-17T13:28:01.200Z|00024|jsonrpc|WARN|unix#934: receive error: Connection reset by peer\n2020-08-17T13:28:01.200Z|00025|reconnect|WARN|unix#934: connection dropped (Connection reset by peer)\n2020-08-17 13:28:05 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 17 13:30:23.431 E ns/openshift-multus pod/multus-zknp9 node/ip-10-0-131-26.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 17 13:30:23.470 E ns/openshift-machine-config-operator pod/machine-config-daemon-vgjz9 node/ip-10-0-131-26.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 17 13:30:23.488 E ns/openshift-machine-config-operator pod/machine-config-server-bjqqk node/ip-10-0-131-26.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0817 13:21:06.930052       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0817 13:21:06.935753       1 api.go:56] Launching server on :22623\nI0817 13:21:06.935642       1 api.go:56] Launching server on :22624\n
Aug 17 13:30:23.504 E ns/openshift-cluster-node-tuning-operator pod/tuned-j564d node/ip-10-0-131-26.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 27:55.531899  122820 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0817 13:27:55.533428  122820 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0817 13:27:55.534546  122820 openshift-tuned.go:441] Getting recommended profile...\nI0817 13:27:55.697827  122820 openshift-tuned.go:635] Active profile () != recommended profile (openshift-control-plane)\nI0817 13:27:55.697879  122820 openshift-tuned.go:263] Starting tuned...\n2020-08-17 13:27:55,831 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-08-17 13:27:55,837 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-08-17 13:27:55,837 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-08-17 13:27:55,839 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-08-17 13:27:55,840 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-08-17 13:27:55,874 INFO     tuned.daemon.controller: starting controller\n2020-08-17 13:27:55,874 INFO     tuned.daemon.daemon: starting tuning\n2020-08-17 13:27:55,881 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-08-17 13:27:55,881 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-08-17 13:27:55,885 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-08-17 13:27:55,887 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-08-17 13:27:55,888 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-08-17 13:27:56,009 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-08-17 13:27:56,011 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0817 13:28:04.207123  122820 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-694c8d685c-469j9) labels changed node wide: true\n
Aug 17 13:30:24.051 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :28:05.709211       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.709351       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.709502       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.709684       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.710222       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.711350       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.711520       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.711667       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.711784       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nI0817 13:28:05.711933       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=3801, ErrCode=NO_ERROR, debug=""\nW0817 13:28:05.747150       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.129.220 10.0.156.63]\n
Aug 17 13:30:24.051 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0817 13:05:29.597494       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 17 13:30:24.051 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:27:05.842960       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:05.843417       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0817 13:27:06.050566       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0817 13:27:06.050957       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 17 13:30:29.888 E ns/openshift-monitoring pod/node-exporter-ct2ff node/ip-10-0-131-26.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:30:30.044 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-26.us-west-1.compute.internal node/ip-10-0-131-26.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): reader" not found, role.rbac.authorization.k8s.io "system:openshift:sa-listing-configmaps" not found]\nE0817 13:07:04.787502       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0817 13:07:32.991790       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\nW0817 13:21:44.722913       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 22530 (31920)\nW0817 13:21:44.782160       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSINode ended with: too old resource version: 22531 (31920)\nW0817 13:25:01.080965       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 22531 (35033)\nW0817 13:25:01.081204       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: too old resource version: 22530 (35033)\nE0817 13:25:31.099872       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\nE0817 13:27:07.777101       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\nE0817 13:27:13.555187       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\nE0817 13:27:15.683431       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\nE0817 13:27:17.908966       1 eventhandlers.go:288] scheduler cache UpdatePod failed: pod 9f8589ea-e6b7-4966-bcd0-21625270839e is not added to scheduler cache, so cannot be updated\n
Aug 17 13:30:31.014 E ns/openshift-multus pod/multus-zknp9 node/ip-10-0-131-26.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 17 13:30:34.393 E ns/openshift-machine-config-operator pod/machine-config-daemon-vgjz9 node/ip-10-0-131-26.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 17 13:30:34.475 E ns/openshift-multus pod/multus-zknp9 node/ip-10-0-131-26.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending