Result: SUCCESS
Tests: 4 failed / 32 succeeded
Started: 2020-08-06 23:53
Elapsed: 1h25m
Work namespace: ci-op-36b2y5sl
Refs: release-4.3:1be922de, 22:9b515de8
Pod: 09b8033e-d840-11ea-84b9-0a580a820729
Repo: openshift/images
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (39m2s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 3s of 34m35s (0%):

Aug 07 00:44:55.944 E ns/e2e-k8s-service-lb-available-9841 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 00:44:56.092 I ns/e2e-k8s-service-lb-available-9841 svc/service-test Service started responding to GET requests on reused connections
Aug 07 00:45:56.945 E ns/e2e-k8s-service-lb-available-9841 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 00:45:57.092 I ns/e2e-k8s-service-lb-available-9841 svc/service-test Service started responding to GET requests on reused connections
Aug 07 00:46:38.944 E ns/e2e-k8s-service-lb-available-9841 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 00:46:39.106 I ns/e2e-k8s-service-lb-available-9841 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1596762614.xml
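
The paired E/I events above come from a poller that repeatedly issues GET requests against the test service's load balancer and records only the transitions between reachable and unreachable. A minimal Go sketch of that idea follows; it is not the openshift/origin monitor code, and the target URL, poll interval, and iteration count are placeholder assumptions.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumption: placeholder URL standing in for the load balancer hostname
	// of svc/service-test in the e2e namespace.
	target := "http://service-test.example.com/"
	client := &http.Client{Timeout: 5 * time.Second} // keep-alives on, so connections are reused

	available := true // assume the service starts out reachable
	for i := 0; i < 60; i++ {
		resp, err := client.Get(target)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		// Report only the transitions, which is what the E/I event pairs record.
		switch {
		case ok && !available:
			fmt.Printf("%s I Service started responding to GET requests\n", time.Now().Format("Jan 02 15:04:05.000"))
		case !ok && available:
			fmt.Printf("%s E Service stopped responding to GET requests\n", time.Now().Format("Jan 02 15:04:05.000"))
		}
		available = ok
		time.Sleep(1 * time.Second)
	}
}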



Cluster upgrade Cluster frontend ingress remain available (38m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 3m24s of 38m1s (9%):

Aug 07 00:43:43.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:43:43.861 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:43:59.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:43:59.865 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:44:55.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:44:55.854 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:44:57.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:44:57.853 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:45:28.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:45:28.849 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:45:30.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:45:30.868 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:45:34.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:45:35.563 - 999ms E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 00:45:36.924 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:47:02.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:47:02.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 07 00:47:02.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:47:02.853 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 07 00:47:02.856 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:47:02.862 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:55:27.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:55:28.563 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 00:55:29.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:55:29.928 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:55:37.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:55:37.873 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:55:38.563 - 19s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 00:55:40.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:55:41.563 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 00:55:48.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:55:48.865 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:55:50.891 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:55:57.882 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:55:59.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:55:59.876 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:56:04.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:56:04.876 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:58:27.749 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:58:28.563 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 00:58:38.091 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 00:58:48.091 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 00:58:48.563 - 39s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 00:58:48.977 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 00:58:49.563 - 38s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 00:58:55.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:58:55.884 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:59:25.057 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 00:59:25.376 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 00:59:28.928 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 00:59:28.945 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:01:54.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:01:54.853 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:01:55.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:01:56.563 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:02:03.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:02:03.871 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:02:05.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:02:05.854 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:02:05.855 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:02:16.563 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:02:17.563 - 18s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:02:23.563 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:02:24.563 - 12s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:02:24.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:02:25.563 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 01:02:34.911 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:02:37.101 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:02:37.105 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:03:07.563 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:03:07.902 I ns/openshift-console route/console Route started responding to GET requests on reused connections
				from junit_upgrade_1596762614.xml
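
The route events distinguish probes made "over new connections" from probes made "on reused connections". In Go's net/http that distinction can be sketched with two clients, one of which disables keep-alives so every request dials a fresh connection while the other reuses idle connections. This is an illustration only; the console route hostname below is a placeholder assumption, not a value taken from this job.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues one GET and reports whether the route answered.
func probe(label string, c *http.Client, url string) {
	resp, err := c.Get(url)
	if err != nil {
		fmt.Printf("E Route stopped responding to GET requests %s: %v\n", label, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("I Route responding to GET requests %s (HTTP %d)\n", label, resp.StatusCode)
}

func main() {
	// Assumption: placeholder console route hostname.
	url := "https://console-openshift-console.apps.example.com/"

	// Reused connections: the default transport keeps idle connections alive
	// and reuses them across requests.
	reused := &http.Client{Timeout: 5 * time.Second}

	// New connections: disabling keep-alives forces a fresh dial per request.
	fresh := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}

	for i := 0; i < 10; i++ {
		probe("on reused connections", reused, url)
		probe("over new connections", fresh, url)
		time.Sleep(1 * time.Second)
	}
}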



Cluster upgrade Kubernetes and OpenShift APIs remain available (38m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 59s of 38m1s (3%):

Aug 07 00:45:01.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 00:45:01.410 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 00:55:55.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 00:55:55.413 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 00:56:15.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded
Aug 07 00:56:15.412 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 00:59:54.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 07 00:59:54.413 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:13.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 07 01:00:13.414 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:23.491 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:24.333 - 10s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:00:35.858 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:41.923 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:42.000 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:44.996 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:45.334 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:00:48.148 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:51.139 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:51.219 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:54.211 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:54.295 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:00:57.283 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:00:57.333 - 3s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:00.434 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:03.428 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:04.333 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:04.412 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:06.499 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:06.580 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:12.644 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:13.333 - 8s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:21.936 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:28.515 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:28.596 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:31.587 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:31.668 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:37.731 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:38.333 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:40.882 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:01:43.875 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:01:44.333 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:47.024 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:03:31.827 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:03:32.333 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:03:33.296 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:03:51.334 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 01:03:52.333 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:04:21.413 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1596762614.xml
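
As the URLs in the log lines show, each outage above was detected by a GET for a deliberately missing imagestream, with a 15s deadline applied both on the client and as the ?timeout=15s server-side parameter. The sketch below reproduces the shape of that request only; authentication and cluster CA trust are omitted, so it illustrates the probe rather than serving as a working monitor.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The API host and path are copied from the log lines above; the "missing"
	// imagestream is requested on purpose, so a 404 still proves the API answered.
	url := "https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443" +
		"/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"

	// Client-side deadline matching the "Client.Timeout exceeded" and
	// "context deadline exceeded" errors seen in the events.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	// Assumption: a real probe would authenticate, e.g.
	// req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("E OpenShift API stopped responding to GET requests:", err)
		return
	}
	defer resp.Body.Close()

	// A 503 here corresponds to "the server is currently unable to handle the request".
	fmt.Println("I OpenShift API responded:", resp.Status)
}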



openshift-tests Monitor cluster while tests execute (39m7s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
193 error level events were detected during this test run:

Aug 07 00:34:47.723 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Aug 07 00:35:05.837 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-59d9c78d4f-kckc6 node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): : 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-129-90.us-west-1.compute.internal pods/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal container=\"kube-apiserver-6\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0807 00:31:42.033457       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"01948a90-08db-4e9e-93aa-0ffe0eaf87ed", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-6 -n openshift-kube-apiserver: cause by changes in data.status\nI0807 00:31:46.239932       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"01948a90-08db-4e9e-93aa-0ffe0eaf87ed", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-6-ip-10-0-129-90.us-west-1.compute.internal -n openshift-kube-apiserver because it was missing\nW0807 00:34:04.327522       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19929 (19969)\nW0807 00:34:48.594045       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20096 (20205)\nW0807 00:35:02.937942       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20205 (20294)\nI0807 00:35:04.723932       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:35:04.724123       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:36:30.251 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6845dc84f5-42lgt node/ip-10-0-155-17.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): :6443: connect: connection refused\\nI0807 00:30:55.748397       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\\nF0807 00:30:55.748489       1 controllermanager.go:291] leaderelection lost\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-129-90.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal container=\"kube-controller-manager-5\" is not ready"\nI0807 00:31:16.671602       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"8ebc2197-a265-4841-93c4-6f8a44a9fe6e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-129-90.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal container=\"kube-controller-manager-5\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nW0807 00:34:04.328581       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19929 (19969)\nW0807 00:34:48.589309       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20096 (20205)\nW0807 00:35:02.937959       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20205 (20294)\nI0807 00:36:29.370307       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:36:29.370375       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:36:36.251 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-5b454885bf-9pfxs node/ip-10-0-155-17.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-07T00:22:16Z","message":"Progressing: 3 nodes are at revision 5","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-07T00:17:13Z","message":"Available: 3 nodes are active; 3 nodes are at revision 5","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-07T00:14:24Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0807 00:31:41.761285       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"31e18dd0-45fc-4a7a-b457-d25f230f2092", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-129-90.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-129-90.us-west-1.compute.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nW0807 00:34:04.332301       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19929 (19969)\nW0807 00:34:48.591424       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20096 (20205)\nW0807 00:35:02.938383       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20205 (20294)\nI0807 00:36:35.520893       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:36:35.525132       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:37:15.242 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 37:14.546330       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:37:14.546341       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:37:14.546351       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:37:14.546360       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:37:14.546370       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:37:14.546379       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:37:14.546388       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:37:14.546397       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:37:14.546406       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:37:14.546416       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:37:14.546430       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:37:14.546452       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:37:14.546463       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:37:14.546473       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:37:14.546518       1 server.go:692] external host was not specified, using 10.0.133.167\nI0807 00:37:14.546722       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:37:14.547095       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:37:30.358 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 37:30.033991       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:37:30.034002       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:37:30.034012       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:37:30.034021       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:37:30.034031       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:37:30.034040       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:37:30.034049       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:37:30.034058       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:37:30.034068       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:37:30.034077       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:37:30.034091       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:37:30.034104       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:37:30.034116       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:37:30.034131       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:37:30.034175       1 server.go:692] external host was not specified, using 10.0.133.167\nI0807 00:37:30.034375       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:37:30.034664       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:37:53.369 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 37:53.021554       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:37:53.021565       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:37:53.021574       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:37:53.021585       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:37:53.021595       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:37:53.021606       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:37:53.021616       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:37:53.021625       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:37:53.021635       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:37:53.021645       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:37:53.021658       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:37:53.021671       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:37:53.021683       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:37:53.021696       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:37:53.021737       1 server.go:692] external host was not specified, using 10.0.133.167\nI0807 00:37:53.021878       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:37:53.022197       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:37:59.393 E ns/openshift-machine-api pod/machine-api-operator-66bc875f6b-qhvrg node/ip-10-0-133-167.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 07 00:38:11.851 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0807 00:38:11.104919       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:38:11.107263       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 00:38:11.107330       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 00:38:11.108235       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 00:38:30.489 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): /localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=21483&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.769578       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=17667&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.770434       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=17672&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.772358       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=19161&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.773703       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=17672&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.775863       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=17667&timeout=8m38s&timeoutSeconds=518&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 00:38:30.024803       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0807 00:38:30.024853       1 server.go:264] leaderelection lost\n
Aug 07 00:38:31.571 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-controller-manager-5 container exited with code 255 (Error): /client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/ingress.operator.openshift.io/v1/dnsrecords?allowWatchBookmarks=true&resourceVersion=18287&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:29.924036       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consolenotifications?allowWatchBookmarks=true&resourceVersion=18399&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:30.738766       1 reflector.go:280] github.com/openshift/client-go/authorization/informers/externalversions/factory.go:101: Failed to watch *v1.RoleBindingRestriction: Get https://localhost:6443/apis/authorization.openshift.io/v1/rolebindingrestrictions?allowWatchBookmarks=true&resourceVersion=18369&timeout=6m32s&timeoutSeconds=392&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:30.748649       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/alertmanagers?allowWatchBookmarks=true&resourceVersion=18286&timeout=7m29s&timeoutSeconds=449&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:38:30.749391       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/migration.k8s.io/v1alpha1/storageversionmigrations?allowWatchBookmarks=true&resourceVersion=18394&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 00:38:30.750524       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0807 00:38:30.750640       1 controllermanager.go:291] leaderelection lost\n
Aug 07 00:39:11.851 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0807 00:39:10.970323       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:39:10.974595       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 00:39:10.974634       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 00:39:10.976214       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 00:39:12.155 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :39:11.463337       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:39:11.463346       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:39:11.463355       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:39:11.463363       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:39:11.463371       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:39:11.463379       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:39:11.463388       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:39:11.463396       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:39:11.463404       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:39:11.463413       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:39:11.463426       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:39:11.463436       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:39:11.463447       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:39:11.463458       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:39:11.463503       1 server.go:692] external host was not specified, using 10.0.129.90\nI0807 00:39:11.463717       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:39:11.464111       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:39:24.265 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :39:23.209528       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:39:23.209538       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:39:23.209548       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:39:23.209558       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:39:23.209568       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:39:23.209577       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:39:23.209586       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:39:23.209596       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:39:23.209605       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:39:23.209617       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:39:23.209647       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:39:23.209658       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:39:23.209670       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:39:23.209682       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:39:23.209725       1 server.go:692] external host was not specified, using 10.0.129.90\nI0807 00:39:23.209863       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:39:23.210086       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:39:29.911 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0807 00:39:29.173928       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:39:29.175653       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 00:39:29.175730       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 00:39:29.176216       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 00:39:48.276 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :39:48.017263       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:39:48.017287       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:39:48.017304       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:39:48.017314       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:39:48.017323       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:39:48.017363       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:39:48.017382       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:39:48.017394       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:39:48.017403       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:39:48.017413       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:39:48.017429       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:39:48.017455       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:39:48.017473       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:39:48.017484       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:39:48.017555       1 server.go:692] external host was not specified, using 10.0.129.90\nI0807 00:39:48.017768       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:39:48.018034       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:40:30.854 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): tSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:40:29.865287       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consoleclidownloads?allowWatchBookmarks=true&resourceVersion=21828&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:40:29.866416       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=21930&timeout=7m23s&timeoutSeconds=443&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:40:29.867689       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/subscriptions?allowWatchBookmarks=true&resourceVersion=21813&timeout=6m52s&timeoutSeconds=412&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:40:29.868722       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PriorityClass: Get https://localhost:6443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=19212&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 00:40:29.899767       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0807 00:40:29.899931       1 pv_protection_controller.go:93] Shutting down PV protection controller\nF0807 00:40:29.900001       1 controllermanager.go:291] leaderelection lost\nI0807 00:40:29.900036       1 replica_set.go:193] Shutting down replicationcontroller controller\nI0807 00:40:29.900053       1 attach_detach_controller.go:368] Shutting down attach detach controller\n
Aug 07 00:40:34.996 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0807 00:40:34.077503       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:40:34.082375       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0807 00:40:34.083049       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 00:40:42.070 E ns/openshift-cluster-machine-approver pod/machine-approver-7f564d9bc4-wln9k node/ip-10-0-133-167.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): sed\nE0807 00:38:29.737219       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0807 00:38:30.738479       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0807 00:38:31.739333       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0807 00:38:32.740318       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0807 00:38:33.741014       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0807 00:38:39.884673       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope\n
Aug 07 00:40:48.003 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (203 of 508)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (142 of 508)\n* Could not update deployment "openshift-cluster-machine-approver/machine-approver" (223 of 508)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (229 of 508)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (274 of 508)\n* Could not update deployment "openshift-console/downloads" (326 of 508)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (240 of 508)\n* Could not update deployment "openshift-monitoring/cluster-monitoring-operator" (300 of 508)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (364 of 508)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (293 of 508)\n* Could not update service "openshift-authentication-operator/metrics" (151 of 508): the server has forbidden updates to this resource
Aug 07 00:40:50.297 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-65f79bbd47-dmddd node/ip-10-0-155-17.us-west-1.compute.internal container=operator container exited with code 255 (Error): rogressing"},{"lastTransitionTime":"2020-08-07T00:15:45Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-07T00:14:24Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0807 00:40:44.648867       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"0cf5eac7-db88-4419-ae5e-311e9e5b2a6e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0807 00:40:46.121888       1 reflector.go:241] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0807 00:40:46.128194       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0807 00:40:46.128752       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0807 00:40:46.140160       1 reflector.go:241] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0807 00:40:46.140176       1 reflector.go:241] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: forcing resync\nI0807 00:40:46.172167       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0807 00:40:46.404590       1 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync\nI0807 00:40:46.889298       1 reflector.go:241] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: forcing resync\nI0807 00:40:49.157395       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:40:49.157975       1 builder.go:217] server exited\n
Aug 07 00:40:52.157 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 255 (Error): I0807 00:40:51.551941       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:40:51.554683       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0807 00:40:51.555552       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0807 00:40:51.555652       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Aug 07 00:41:01.339 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7c59dfbd6c-8shpc node/ip-10-0-155-17.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0807 00:31:03.885602       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:31:03.964461       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:31:03.964590       1 status.go:25] syncOperatorStatus()\nI0807 00:31:03.980740       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:31:03.980877       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:31:04.029202       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:31:04.085343       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.091023       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.095812       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:31:04.105553       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:31:04.105577       1 status.go:25] syncOperatorStatus()\nI0807 00:31:04.114355       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:31:04.114502       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:31:04.170433       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:31:04.307457       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.317408       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.329804       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:31:04.477817       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:31:04.477954       1 status.go:25] syncOperatorStatus()\nI0807 00:31:04.504538       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:31:04.504694       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:31:04.562294       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:31:04.618862       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.623992       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:31:04.628100       1 tuned_controller.go:320] syncDaemonSet()\nF0807 00:41:00.353084       1 main.go:82] <nil>\n
Aug 07 00:41:12.487 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :41:11.813713       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:41:11.813723       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:41:11.813732       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:41:11.813741       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:41:11.813750       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:41:11.813760       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:41:11.813770       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:41:11.813779       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:41:11.813789       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:41:11.813799       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:41:11.813814       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:41:11.813826       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:41:11.813839       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:41:11.813852       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:41:11.813900       1 server.go:692] external host was not specified, using 10.0.155.17\nI0807 00:41:11.814139       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:41:11.814382       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:41:27.139 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-129.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/07 00:26:20 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 07 00:41:27.139 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-129.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/07 00:26:20 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 00:26:20 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 00:26:20 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 00:26:20 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 00:26:20 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 00:26:20 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 00:26:20 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 00:26:20 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 00:26:20 http.go:106: HTTPS: listening on [::]:9091\n2020/08/07 00:37:38 oauthproxy.go:774: basicauth: 10.129.0.30:47470 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 07 00:41:27.139 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-129.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-07T00:26:20.375511232Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-07T00:26:20.375626163Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-07T00:26:20.376910595Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-07T00:26:25.520765577Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 07 00:41:30.563 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :41:30.270557       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:41:30.270566       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:41:30.270575       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:41:30.270585       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:41:30.270594       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:41:30.270602       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:41:30.270610       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:41:30.270619       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:41:30.270627       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:41:30.270637       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:41:30.270653       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:41:30.270676       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:41:30.270687       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:41:30.270697       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:41:30.270738       1 server.go:692] external host was not specified, using 10.0.155.17\nI0807 00:41:30.270958       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:41:30.271253       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:41:42.914 E ns/openshift-monitoring pod/node-exporter-7nnqd node/ip-10-0-128-37.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:23:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:23:05Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:23:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:23:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:23:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:23:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:23:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 00:41:43.939 E ns/openshift-monitoring pod/openshift-state-metrics-6dfdfff777-cpm67 node/ip-10-0-128-37.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 07 00:41:49.696 E ns/openshift-service-ca-operator pod/service-ca-operator-76fcb65f57-7sfkt node/ip-10-0-155-17.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Aug 07 00:42:01.318 E ns/openshift-controller-manager pod/controller-manager-xp9zk node/ip-10-0-133-167.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 07 00:42:01.331 E ns/openshift-authentication-operator pod/authentication-operator-7f69c8d8c5-44shp node/ip-10-0-129-90.us-west-1.compute.internal container=operator container exited with code 255 (Error):  stream event decoding: unexpected EOF\nI0807 00:38:18.715593       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.715604       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.715613       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.747828       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0807 00:38:18.900518       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 18200 (18641)\nW0807 00:38:18.913837       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 18200 (18641)\nW0807 00:38:18.914042       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18287 (21323)\nW0807 00:38:19.013217       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18287 (21323)\nW0807 00:38:19.083673       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18287 (21323)\nW0807 00:38:19.093923       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 18531 (18641)\nW0807 00:38:19.094104       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20690 (21323)\nW0807 00:41:16.340128       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 23640 (23675)\nI0807 00:42:00.445017       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:42:00.445460       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:42:05.930 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :42:02.308632       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 00:42:02.308642       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 00:42:02.308652       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 00:42:02.308662       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 00:42:02.308671       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 00:42:02.308680       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 00:42:02.308689       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 00:42:02.308699       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 00:42:02.308708       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 00:42:02.308718       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 00:42:02.308736       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 00:42:02.308754       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 00:42:02.308765       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 00:42:02.308776       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 00:42:02.308817       1 server.go:692] external host was not specified, using 10.0.155.17\nI0807 00:42:02.308984       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 00:42:02.309233       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 00:42:06.354 E ns/openshift-console-operator pod/console-operator-6df76f9579-bqk4k node/ip-10-0-129-90.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): ected EOF\nI0807 00:38:18.746021       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.746543       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.746971       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.751817       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.756382       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.756855       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0807 00:38:18.756927       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0807 00:38:18.913076       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18287 (21323)\nW0807 00:38:19.000011       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 18200 (18641)\nW0807 00:38:19.000201       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20631 (21323)\nW0807 00:38:19.078248       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 18273 (19953)\nW0807 00:38:19.112227       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18287 (21323)\nW0807 00:41:16.345270       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 23640 (23675)\nI0807 00:42:05.665431       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:42:05.666120       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:42:17.407 E ns/openshift-monitoring pod/thanos-querier-58dcbd44c-mk22q node/ip-10-0-156-129.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/07 00:25:21 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 00:25:21 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 00:25:21 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 00:25:21 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 00:25:21 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 00:25:21 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 00:25:21 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 00:25:21 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 00:25:21 http.go:106: HTTPS: listening on [::]:9091\n
Aug 07 00:42:19.212 E ns/openshift-monitoring pod/prometheus-adapter-6cdb95b575-qsqvx node/ip-10-0-128-37.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0807 00:23:55.982376       1 adapter.go:93] successfully using in-cluster auth\nI0807 00:23:56.321438       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 07 00:42:29.531 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-129.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T00:41:44.509Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T00:41:44.513Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T00:41:44.514Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T00:41:44.519Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T00:41:44.523Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T00:41:44.523Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 00:42:30.006 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): refused\nE0807 00:42:29.138852       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/openshiftcontrollermanagers?allowWatchBookmarks=true&resourceVersion=25044&timeout=7m22s&timeoutSeconds=442&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:29.140025       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/images?allowWatchBookmarks=true&resourceVersion=16998&timeout=8m57s&timeoutSeconds=537&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:29.141066       1 reflector.go:280] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: Get https://localhost:6443/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=25418&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:29.142329       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/alertmanagers?allowWatchBookmarks=true&resourceVersion=23676&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:29.143538       1 reflector.go:280] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=22950&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 00:42:29.201377       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0807 00:42:29.201463       1 controllermanager.go:291] leaderelection lost\n
Aug 07 00:42:37.147 E ns/openshift-monitoring pod/grafana-86885f467-5bcrb node/ip-10-0-128-37.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Aug 07 00:42:37.398 E ns/openshift-monitoring pod/thanos-querier-58dcbd44c-ttxfl node/ip-10-0-138-213.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/07 00:25:15 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 00:25:15 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 00:25:15 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 00:25:15 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 00:25:15 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 00:25:15 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 00:25:15 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 00:25:15 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 00:25:15 http.go:106: HTTPS: listening on [::]:9091\n
Aug 07 00:42:39.152 E ns/openshift-marketplace pod/redhat-operators-f58cf5f5-rgkcw node/ip-10-0-128-37.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 07 00:42:42.069 E ns/openshift-monitoring pod/node-exporter-747t4 node/ip-10-0-155-17.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:18:42Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:42Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 00:42:43.412 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-138-213.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/07 00:25:46 Watching directory: "/etc/alertmanager/config"\n
Aug 07 00:42:43.412 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-138-213.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/07 00:25:46 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 00:25:46 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 00:25:46 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 00:25:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/07 00:25:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 00:25:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 00:25:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 00:25:46 http.go:106: HTTPS: listening on [::]:9095\n
Aug 07 00:42:47.081 E ns/openshift-controller-manager pod/controller-manager-lht8f node/ip-10-0-155-17.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 07 00:42:48.539 E ns/openshift-monitoring pod/node-exporter-9klz9 node/ip-10-0-129-90.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:18:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:18:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 00:42:52.216 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-37.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T00:42:47.285Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T00:42:47.295Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T00:42:47.295Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T00:42:47.296Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T00:42:47.296Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T00:42:47.297Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T00:42:47.297Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T00:42:47.297Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T00:42:47.298Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T00:42:47.298Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 00:43:03.466 E ns/openshift-marketplace pod/certified-operators-5759c4d86b-gvztq node/ip-10-0-138-213.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 07 00:43:04.162 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=scheduler container exited with code 255 (Error): rmers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0807 00:43:02.406668       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\nE0807 00:43:02.406713       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0807 00:43:02.406830       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0807 00:43:02.406866       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0807 00:43:02.406896       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0807 00:43:02.406919       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nW0807 00:43:02.424926       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: too old resource version: 25451 (26288)\nW0807 00:43:02.433432       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: too old resource version: 25524 (26294)\nW0807 00:43:02.440292       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: too old resource version: 21572 (26288)\nW0807 00:43:02.440433       1 reflector.go:299] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: too old resource version: 21572 (26288)\nE0807 00:43:03.526012       1 cache.go:431] Pod 761ce85d-25bb-4ac0-89dd-91d664b11c00 updated on a different node than previously added to.\nF0807 00:43:03.526038       1 cache.go:432] Schedulercache is corrupted and can badly affect scheduling decisions\n
Aug 07 00:43:09.268 E ns/openshift-marketplace pod/community-operators-dccdb9f56-v44wj node/ip-10-0-128-37.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 07 00:43:13.644 E ns/openshift-service-ca pod/service-serving-cert-signer-6854b56c46-5k4j6 node/ip-10-0-129-90.us-west-1.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Aug 07 00:43:13.668 E ns/openshift-service-ca pod/apiservice-cabundle-injector-5fdffdb674-zjh8x node/ip-10-0-129-90.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Aug 07 00:43:34.328 E ns/openshift-console pod/console-9ccd8f5dc-s2mqq node/ip-10-0-155-17.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/08/7 00:26:51 cmd/main: cookies are secure!\n2020/08/7 00:26:51 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:27:01 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:27:11 cmd/main: Binding to [::]:8443...\n2020/08/7 00:27:11 cmd/main: using TLS\n
Aug 07 00:43:49.703 E ns/openshift-console pod/console-9ccd8f5dc-zf8dz node/ip-10-0-133-167.us-west-1.compute.internal container=console container exited with code 2 (Error): : 404 Not Found\n2020/08/7 00:25:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:25:14 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:25:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:25:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:25:44 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:25:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:26:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:26:14 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:26:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:26:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 00:26:44 cmd/main: Binding to [::]:8443...\n2020/08/7 00:26:44 cmd/main: using TLS\n
Aug 07 00:44:45.064 E ns/openshift-sdn pod/sdn-controller-rk8zh node/ip-10-0-129-90.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:13:46.796436       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 00:44:46.121 E ns/openshift-sdn pod/ovs-bns7z node/ip-10-0-129-90.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): he last 0 s (5 adds)\n2020-08-07T00:42:42.574Z|00378|connmgr|INFO|br0<->unix#1950: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:12.795Z|00379|connmgr|INFO|br0<->unix#1974: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:12.826Z|00380|connmgr|INFO|br0<->unix#1977: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:12.852Z|00381|bridge|INFO|bridge br0: deleted interface veth0af72743 on port 3\n2020-08-07T00:43:13.011Z|00382|connmgr|INFO|br0<->unix#1980: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:13.044Z|00383|connmgr|INFO|br0<->unix#1983: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:13.068Z|00384|bridge|INFO|bridge br0: deleted interface veth15a636b6 on port 4\n2020-08-07T00:43:23.489Z|00385|bridge|INFO|bridge br0: added interface veth330d5163 on port 64\n2020-08-07T00:43:23.525Z|00386|connmgr|INFO|br0<->unix#1995: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:43:23.566Z|00387|connmgr|INFO|br0<->unix#1998: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:27.511Z|00388|connmgr|INFO|br0<->unix#2004: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:27.545Z|00389|connmgr|INFO|br0<->unix#2007: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:27.578Z|00390|bridge|INFO|bridge br0: deleted interface vethe30b08ee on port 58\n2020-08-07T00:43:37.369Z|00391|bridge|INFO|bridge br0: added interface vethdd56752a on port 65\n2020-08-07T00:43:37.425Z|00392|connmgr|INFO|br0<->unix#2015: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:43:37.477Z|00393|connmgr|INFO|br0<->unix#2019: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:43.838Z|00394|connmgr|INFO|br0<->unix#2025: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:43.870Z|00395|connmgr|INFO|br0<->unix#2028: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:43.893Z|00396|bridge|INFO|bridge br0: deleted interface veth90a0f613 on port 40\n2020-08-07 00:44:45 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:44:54.338 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 00:45:07.966 E ns/openshift-sdn pod/sdn-controller-kj5v4 node/ip-10-0-133-167.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:13:49.786625       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 00:45:13.542 E ns/openshift-multus pod/multus-7g5jd node/ip-10-0-128-37.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 00:45:13.641 E ns/openshift-multus pod/multus-admission-controller-9wht8 node/ip-10-0-155-17.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 07 00:45:19.564 E ns/openshift-sdn pod/ovs-2b9dp node/ip-10-0-128-37.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): ds in the last 0 s (4 deletes)\n2020-08-07T00:42:38.329Z|00186|bridge|INFO|bridge br0: deleted interface veth40f9f787 on port 8\n2020-08-07T00:42:39.465Z|00187|bridge|INFO|bridge br0: added interface veth87bb0d22 on port 33\n2020-08-07T00:42:39.537Z|00188|connmgr|INFO|br0<->unix#1158: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:42:39.609Z|00189|connmgr|INFO|br0<->unix#1161: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:42:42.512Z|00190|bridge|INFO|bridge br0: added interface veth5da5eef0 on port 34\n2020-08-07T00:42:42.555Z|00191|connmgr|INFO|br0<->unix#1165: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:42:42.616Z|00192|connmgr|INFO|br0<->unix#1168: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:08.249Z|00193|connmgr|INFO|br0<->unix#1192: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:08.276Z|00194|connmgr|INFO|br0<->unix#1195: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:08.299Z|00195|bridge|INFO|bridge br0: deleted interface vethda8f6f13 on port 7\n2020-08-07T00:43:08.801Z|00196|connmgr|INFO|br0<->unix#1198: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:08.831Z|00197|connmgr|INFO|br0<->unix#1201: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:08.856Z|00198|bridge|INFO|bridge br0: deleted interface vethd63b3541 on port 12\n2020-08-07T00:43:17.480Z|00199|connmgr|INFO|br0<->unix#1208: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:17.508Z|00200|connmgr|INFO|br0<->unix#1211: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:17.531Z|00201|bridge|INFO|bridge br0: deleted interface veth7684eb91 on port 3\n2020-08-07T00:43:24.404Z|00202|bridge|INFO|bridge br0: added interface veth2a426154 on port 35\n2020-08-07T00:43:24.433Z|00203|connmgr|INFO|br0<->unix#1219: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:43:24.471Z|00204|connmgr|INFO|br0<->unix#1222: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07 00:45:18 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:45:27.655 E ns/openshift-sdn pod/sdn-qsv8t node/ip-10-0-128-37.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ssing 0 service events\nI0807 00:44:26.113053    2044 proxier.go:350] userspace syncProxyRules took 26.787151ms\nI0807 00:44:42.577438    2044 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.130.0.7:6443]\nI0807 00:44:42.577528    2044 roundrobin.go:218] Delete endpoint 10.129.0.13:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:44:42.745293    2044 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:44:42.745322    2044 proxier.go:350] userspace syncProxyRules took 29.078396ms\nI0807 00:45:12.893971    2044 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:45:12.894009    2044 proxier.go:350] userspace syncProxyRules took 38.223701ms\nI0807 00:45:26.660175    2044 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443 10.130.0.7:6443]\nI0807 00:45:26.660208    2044 roundrobin.go:218] Delete endpoint 10.129.0.76:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:45:26.686784    2044 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443]\nI0807 00:45:26.686843    2044 roundrobin.go:218] Delete endpoint 10.130.0.7:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:45:26.790301    2044 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:45:26.790329    2044 proxier.go:350] userspace syncProxyRules took 30.707732ms\nI0807 00:45:26.910258    2044 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:45:26.910288    2044 proxier.go:350] userspace syncProxyRules took 27.065155ms\nI0807 00:45:27.277710    2044 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 00:45:27.277768    2044 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 00:45:46.908 E ns/openshift-sdn pod/ovs-s7ptt node/ip-10-0-156-129.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): w_mods in the last 0 s (2 deletes)\n2020-08-07T00:42:37.407Z|00157|bridge|INFO|bridge br0: added interface veth8bb9a3f9 on port 26\n2020-08-07T00:42:37.456Z|00158|connmgr|INFO|br0<->unix#1088: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:42:37.518Z|00159|connmgr|INFO|br0<->unix#1091: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:42:42.463Z|00160|connmgr|INFO|br0<->unix#1097: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:42:42.500Z|00161|connmgr|INFO|br0<->unix#1100: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:42:42.529Z|00162|bridge|INFO|bridge br0: deleted interface vethdccfcfbb on port 8\n2020-08-07T00:45:13.445Z|00163|connmgr|INFO|br0<->unix#1213: 2 flow_mods in the last 0 s (2 adds)\n2020-08-07T00:45:13.496Z|00164|connmgr|INFO|br0<->unix#1217: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:13.555Z|00165|connmgr|INFO|br0<->unix#1225: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:13.704Z|00166|connmgr|INFO|br0<->unix#1228: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:13.727Z|00167|connmgr|INFO|br0<->unix#1231: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:13.751Z|00168|connmgr|INFO|br0<->unix#1234: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:13.775Z|00169|connmgr|INFO|br0<->unix#1237: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:13.809Z|00170|connmgr|INFO|br0<->unix#1240: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:13.838Z|00171|connmgr|INFO|br0<->unix#1243: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:13.867Z|00172|connmgr|INFO|br0<->unix#1246: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:13.904Z|00173|connmgr|INFO|br0<->unix#1249: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:13.935Z|00174|connmgr|INFO|br0<->unix#1252: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:13.972Z|00175|connmgr|INFO|br0<->unix#1255: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07 00:45:46 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:45:49.931 E ns/openshift-sdn pod/sdn-6764r node/ip-10-0-156-129.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 45:13.936821   66493 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:32725/tcp)\nI0807 00:45:13.937147   66493 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-9841/service-test:" (:31593/tcp)\nI0807 00:45:13.937667   66493 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:31960/tcp)\nI0807 00:45:13.972275   66493 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31269\nI0807 00:45:13.981846   66493 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0807 00:45:13.981883   66493 cmd.go:173] openshift-sdn network plugin registering startup\nI0807 00:45:13.981988   66493 cmd.go:177] openshift-sdn network plugin ready\nI0807 00:45:26.663196   66493 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443 10.130.0.7:6443]\nI0807 00:45:26.663236   66493 roundrobin.go:218] Delete endpoint 10.129.0.76:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:45:26.691491   66493 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443]\nI0807 00:45:26.691531   66493 roundrobin.go:218] Delete endpoint 10.130.0.7:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:45:26.780403   66493 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:45:26.780422   66493 proxier.go:350] userspace syncProxyRules took 26.531343ms\nI0807 00:45:26.895397   66493 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:45:26.895423   66493 proxier.go:350] userspace syncProxyRules took 28.287369ms\nI0807 00:45:48.824649   66493 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 00:45:48.824706   66493 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 00:46:08.207 E ns/openshift-sdn pod/ovs-2mc5j node/ip-10-0-133-167.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error):  in the last 0 s (1 deletes)\n2020-08-07T00:45:32.285Z|00374|connmgr|INFO|br0<->unix#2073: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:32.313Z|00375|connmgr|INFO|br0<->unix#2076: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:32.341Z|00376|connmgr|INFO|br0<->unix#2079: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:32.377Z|00377|connmgr|INFO|br0<->unix#2082: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:32.401Z|00378|connmgr|INFO|br0<->unix#2085: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:32.431Z|00379|connmgr|INFO|br0<->unix#2088: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:32.463Z|00380|connmgr|INFO|br0<->unix#2091: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:32.490Z|00381|connmgr|INFO|br0<->unix#2094: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:32.520Z|00382|connmgr|INFO|br0<->unix#2097: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:32.546Z|00383|connmgr|INFO|br0<->unix#2100: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:32.571Z|00384|connmgr|INFO|br0<->unix#2103: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:32.597Z|00385|connmgr|INFO|br0<->unix#2106: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:32.621Z|00386|connmgr|INFO|br0<->unix#2109: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:56.935Z|00387|connmgr|INFO|br0<->unix#2127: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:45:56.967Z|00388|connmgr|INFO|br0<->unix#2130: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:45:56.994Z|00389|bridge|INFO|bridge br0: deleted interface veth1b6d4305 on port 8\n2020-08-07T00:46:01.478Z|00390|bridge|INFO|bridge br0: added interface veth039dffd0 on port 62\n2020-08-07T00:46:01.522Z|00391|connmgr|INFO|br0<->unix#2136: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T00:46:01.570Z|00392|connmgr|INFO|br0<->unix#2139: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07 00:46:07 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:46:11.229 E ns/openshift-sdn pod/sdn-c8tcl node/ip-10-0-133-167.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 127   80399 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31269\nI0807 00:45:32.585987   80399 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0807 00:45:32.586023   80399 cmd.go:173] openshift-sdn network plugin registering startup\nI0807 00:45:32.586124   80399 cmd.go:177] openshift-sdn network plugin ready\nI0807 00:45:57.006925   80399 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-p2jjf\nI0807 00:46:01.542072   80399 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-w784l got IP 10.130.0.61, ofport 62\nI0807 00:46:02.459824   80399 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:02.459851   80399 proxier.go:350] userspace syncProxyRules took 27.822958ms\nI0807 00:46:07.177372   80399 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443 10.130.0.61:6443]\nI0807 00:46:07.177425   80399 roundrobin.go:218] Delete endpoint 10.130.0.61:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:46:07.208648   80399 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.76:6443 10.130.0.61:6443]\nI0807 00:46:07.208690   80399 roundrobin.go:218] Delete endpoint 10.128.0.6:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:46:07.360773   80399 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:07.360804   80399 proxier.go:350] userspace syncProxyRules took 39.339301ms\nI0807 00:46:07.509579   80399 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:07.509604   80399 proxier.go:350] userspace syncProxyRules took 29.398725ms\nI0807 00:46:10.192332   80399 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 00:46:10.192437   80399 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 00:46:29.860 E ns/openshift-sdn pod/ovs-gfg97 node/ip-10-0-138-213.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): ds in the last 0 s (2 deletes)\n2020-08-07T00:42:52.457Z|00139|connmgr|INFO|br0<->unix#1065: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:42:52.502Z|00140|connmgr|INFO|br0<->unix#1068: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:42:52.540Z|00141|bridge|INFO|bridge br0: deleted interface veth53aa1a9f on port 7\n2020-08-07T00:43:02.968Z|00142|connmgr|INFO|br0<->unix#1079: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:43:03.006Z|00143|connmgr|INFO|br0<->unix#1082: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:43:03.036Z|00144|bridge|INFO|bridge br0: deleted interface veth944eb5fc on port 5\n2020-08-07T00:46:07.202Z|00145|connmgr|INFO|br0<->unix#1217: 2 flow_mods in the last 0 s (2 adds)\n2020-08-07T00:46:07.264Z|00146|connmgr|INFO|br0<->unix#1221: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:46:07.322Z|00147|connmgr|INFO|br0<->unix#1229: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:46:07.459Z|00148|connmgr|INFO|br0<->unix#1232: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:46:07.495Z|00149|connmgr|INFO|br0<->unix#1235: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:46:07.532Z|00150|connmgr|INFO|br0<->unix#1238: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:46:07.572Z|00151|connmgr|INFO|br0<->unix#1241: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:46:07.603Z|00152|connmgr|INFO|br0<->unix#1245: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:46:07.639Z|00153|connmgr|INFO|br0<->unix#1249: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:46:07.670Z|00154|connmgr|INFO|br0<->unix#1252: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:46:07.703Z|00155|connmgr|INFO|br0<->unix#1255: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:46:07.730Z|00156|connmgr|INFO|br0<->unix#1258: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:46:07.752Z|00157|connmgr|INFO|br0<->unix#1261: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07 00:46:28 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:46:34.867 E ns/openshift-sdn pod/sdn-wwvrn node/ip-10-0-138-213.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ding new service port "openshift-cloud-credential-operator/controller-manager-service:" at 172.30.201.138:443/TCP\nI0807 00:46:07.522789   71957 service.go:357] Adding new service port "openshift-kube-scheduler-operator/metrics:https" at 172.30.131.153:443/TCP\nI0807 00:46:07.522807   71957 service.go:357] Adding new service port "openshift-monitoring/grafana:https" at 172.30.76.96:3000/TCP\nI0807 00:46:07.522824   71957 service.go:357] Adding new service port "openshift-machine-api/cluster-autoscaler-operator:https" at 172.30.19.170:443/TCP\nI0807 00:46:07.522842   71957 service.go:357] Adding new service port "openshift-machine-api/cluster-autoscaler-operator:metrics" at 172.30.19.170:9192/TCP\nI0807 00:46:07.523144   71957 proxier.go:731] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0807 00:46:07.635194   71957 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:07.635220   71957 proxier.go:350] userspace syncProxyRules took 111.779971ms\nI0807 00:46:07.683426   71957 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:https" (:31960/tcp)\nI0807 00:46:07.683895   71957 proxier.go:1552] Opened local port "nodePort for e2e-k8s-service-lb-available-9841/service-test:" (:31593/tcp)\nI0807 00:46:07.684011   71957 proxier.go:1552] Opened local port "nodePort for openshift-ingress/router-default:http" (:32725/tcp)\nI0807 00:46:07.715450   71957 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31269\nI0807 00:46:07.723535   71957 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0807 00:46:07.723574   71957 cmd.go:173] openshift-sdn network plugin registering startup\nI0807 00:46:07.723680   71957 cmd.go:177] openshift-sdn network plugin ready\nI0807 00:46:34.480493   71957 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 00:46:34.480533   71957 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 00:46:52.967 E ns/openshift-sdn pod/ovs-hzzcm node/ip-10-0-155-17.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error):  s (1 deletes)\n2020-08-07T00:45:47.197Z|00486|connmgr|INFO|br0<->unix#2313: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:47.205Z|00487|connmgr|INFO|br0<->unix#2315: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.236Z|00488|connmgr|INFO|br0<->unix#2319: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:47.241Z|00489|connmgr|INFO|br0<->unix#2321: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.272Z|00490|connmgr|INFO|br0<->unix#2326: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.275Z|00491|connmgr|INFO|br0<->unix#2327: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:47.308Z|00492|connmgr|INFO|br0<->unix#2331: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.311Z|00493|connmgr|INFO|br0<->unix#2333: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:47.347Z|00494|connmgr|INFO|br0<->unix#2337: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.366Z|00495|connmgr|INFO|br0<->unix#2340: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:47.395Z|00496|connmgr|INFO|br0<->unix#2343: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.396Z|00497|connmgr|INFO|br0<->unix#2345: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:47.428Z|00498|connmgr|INFO|br0<->unix#2349: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:47.429Z|00499|connmgr|INFO|br0<->unix#2351: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.462Z|00500|connmgr|INFO|br0<->unix#2355: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T00:45:47.475Z|00501|connmgr|INFO|br0<->unix#2358: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.495Z|00502|connmgr|INFO|br0<->unix#2361: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T00:45:47.516Z|00503|connmgr|INFO|br0<->unix#2364: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T00:45:47.526Z|00504|connmgr|INFO|br0<->unix#2366: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07 00:46:52 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:47:03.010 E ns/openshift-sdn pod/sdn-fwwhk node/ip-10-0-155-17.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ring startup\nI0807 00:45:47.562976   85054 cmd.go:177] openshift-sdn network plugin ready\nI0807 00:46:07.179174   85054 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.6:6443 10.129.0.76:6443 10.130.0.61:6443]\nI0807 00:46:07.179214   85054 roundrobin.go:218] Delete endpoint 10.130.0.61:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:46:07.208587   85054 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.76:6443 10.130.0.61:6443]\nI0807 00:46:07.208625   85054 roundrobin.go:218] Delete endpoint 10.128.0.6:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:46:07.313086   85054 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:07.313110   85054 proxier.go:350] userspace syncProxyRules took 29.603272ms\nI0807 00:46:07.442147   85054 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:07.442173   85054 proxier.go:350] userspace syncProxyRules took 29.033795ms\nI0807 00:46:37.566732   85054 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:37.566761   85054 proxier.go:350] userspace syncProxyRules took 27.971595ms\nI0807 00:46:52.505253   85054 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.65:6443 10.129.0.76:6443 10.130.0.61:6443]\nI0807 00:46:52.505301   85054 roundrobin.go:218] Delete endpoint 10.128.0.65:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 00:46:52.654356   85054 proxier.go:371] userspace proxy: processing 0 service events\nI0807 00:46:52.654388   85054 proxier.go:350] userspace syncProxyRules took 38.167179ms\nI0807 00:47:01.984950   85054 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 00:47:01.984998   85054 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 00:47:03.982 E ns/openshift-multus pod/multus-42vzf node/ip-10-0-138-213.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 00:47:32.091 E ns/openshift-service-ca pod/apiservice-cabundle-injector-74d6d9594f-q2htx node/ip-10-0-155-17.us-west-1.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Aug 07 00:48:01.650 E ns/openshift-multus pod/multus-ns258 node/ip-10-0-133-167.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 00:48:58.436 E ns/openshift-multus pod/multus-82grg node/ip-10-0-155-17.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 00:49:45.164 E ns/openshift-multus pod/multus-jb677 node/ip-10-0-129-90.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 00:50:30.684 E ns/openshift-machine-config-operator pod/machine-config-operator-745b5b7d79-xr687 node/ip-10-0-155-17.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): 370 (24095)\nW0807 00:42:19.072264       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 18284 (24662)\nW0807 00:42:19.072335       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 18832 (21759)\nW0807 00:42:19.072845       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 18290 (24670)\nW0807 00:42:19.072968       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 18051 (21758)\nW0807 00:42:19.073048       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 18292 (24928)\nW0807 00:42:19.074799       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 17672 (21761)\nW0807 00:42:19.075005       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21662 (24336)\nW0807 00:42:19.209150       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 18290 (25566)\nW0807 00:42:19.249035       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 18288 (25566)\nW0807 00:42:19.297036       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 18288 (25567)\n
Aug 07 00:52:25.862 E ns/openshift-machine-config-operator pod/machine-config-daemon-tx5nt node/ip-10-0-156-129.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:52:40.527 E ns/openshift-machine-config-operator pod/machine-config-daemon-jr7hr node/ip-10-0-138-213.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:52:48.706 E ns/openshift-machine-config-operator pod/machine-config-daemon-82drk node/ip-10-0-129-90.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:53:04.196 E ns/openshift-machine-config-operator pod/machine-config-daemon-lmbvj node/ip-10-0-155-17.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:53:09.718 E ns/openshift-machine-config-operator pod/machine-config-daemon-cdnnj node/ip-10-0-128-37.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:53:18.569 E ns/openshift-machine-config-operator pod/machine-config-daemon-ss2xk node/ip-10-0-133-167.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:53:30.849 E ns/openshift-machine-config-operator pod/machine-config-controller-647cff6988-6w45j node/ip-10-0-129-90.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): 882)\nW0807 00:42:19.144696       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 17020 (23711)\nW0807 00:42:19.189976       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ContainerRuntimeConfig ended with: too old resource version: 18288 (25565)\nW0807 00:42:19.255170       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 17477 (25563)\nW0807 00:42:19.257807       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 18287 (25566)\nW0807 00:42:19.294526       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.KubeletConfig ended with: too old resource version: 18287 (25567)\nW0807 00:42:19.331322       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 18286 (25567)\nI0807 00:42:20.237937       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0807 00:42:20.293057       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nW0807 00:47:49.253086       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28500 (28827)\nW0807 00:47:52.024194       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterVersion ended with: too old resource version: 28827 (28845)\n
Aug 07 00:55:07.182 E ns/openshift-machine-config-operator pod/machine-config-server-9h2hx node/ip-10-0-129-90.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:15:06.295935       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:15:06.296771       1 api.go:56] Launching server on :22624\nI0807 00:15:06.296839       1 api.go:56] Launching server on :22623\nI0807 00:19:16.097000       1 api.go:102] Pool worker requested by 10.0.144.57:19206\n
Aug 07 00:55:17.247 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-156-129.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T00:41:44.509Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T00:41:44.513Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T00:41:44.514Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T00:41:44.518Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T00:41:44.518Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T00:41:44.519Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T00:41:44.523Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T00:41:44.523Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 00:55:17.479 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-74c8f5fbd7-8tqf8 node/ip-10-0-129-90.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): 27       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"71616f96-3d4b-4b1e-9dbd-41de3fc7b0f3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("Available: \"project.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"security.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)")\nI0807 00:47:18.365684       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"71616f96-3d4b-4b1e-9dbd-41de3fc7b0f3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nW0807 00:47:49.123315       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 28769 (28827)\nW0807 00:47:51.875457       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 28827 (28844)\nW0807 00:55:02.963026       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31143 (31334)\nW0807 00:55:05.783206       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31334 (31355)\nI0807 00:55:15.914200       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:55:15.914265       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:55:17.519 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-54b954ffcc-rpxrr node/ip-10-0-129-90.us-west-1.compute.internal container=operator container exited with code 255 (Error):  reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 1 items received\nW0807 00:55:03.024238       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31143 (31337)\nI0807 00:55:04.024488       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0807 00:55:05.700435       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 0 items received\nW0807 00:55:05.782666       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31337 (31355)\nI0807 00:55:05.798857       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0807 00:55:06.791572       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0807 00:55:14.752467       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0807 00:55:14.752498       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0807 00:55:14.757420       1 httplog.go:90] GET /metrics: (9.296636ms) 200 [Prometheus/2.14.0 10.128.2.32:48606]\nI0807 00:55:15.834090       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0807 00:55:15.834256       1 leaderelection.go:287] failed to renew lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock: failed to tryAcquireOrRenew context canceled\nF0807 00:55:15.834321       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:55:21.062 E ns/openshift-machine-config-operator pod/machine-config-server-sdb6j node/ip-10-0-133-167.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:15:06.316487       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:15:06.317723       1 api.go:56] Launching server on :22624\nI0807 00:15:06.317780       1 api.go:56] Launching server on :22623\n
Aug 07 00:55:38.917 E kube-apiserver failed contacting the API: Get https://api.ci-op-36b2y5sl-77ea6.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=32520&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp 52.52.134.193:6443: connect: connection refused
Aug 07 00:55:54.338 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 00:55:55.730 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-138-213.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T00:55:31.021Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T00:55:31.025Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T00:55:31.026Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T00:55:31.027Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T00:55:31.027Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T00:55:31.027Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T00:55:31.028Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T00:55:31.028Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 00:56:24.266 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Aug 07 00:56:38.194 E ns/openshift-marketplace pod/certified-operators-d9d4b6659-drjrz node/ip-10-0-128-37.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 07 00:56:38.812 E ns/openshift-marketplace pod/community-operators-64ffcb44d6-qkth4 node/ip-10-0-138-213.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 07 00:57:50.726 E ns/openshift-cluster-node-tuning-operator pod/tuned-w844t node/ip-10-0-156-129.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:52:14.480867   54133 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:52:14.637321   54133 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:52:35.665296   54133 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-tx5nt) labels changed node wide: true\nI0807 00:52:39.479250   54133 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:52:39.480856   54133 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:52:39.594553   54133 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:55:18.318545   54133 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-69474c4788-4ckp4) labels changed node wide: true\nI0807 00:55:19.479255   54133 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:55:19.480961   54133 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:55:19.592869   54133 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:55:25.683493   54133 openshift-tuned.go:550] Pod (openshift-monitoring/grafana-5cd7888ccc-drtz7) labels changed node wide: true\nI0807 00:55:29.479294   54133 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:55:29.480713   54133 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:55:29.593348   54133 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:55:38.721770   54133 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0807 00:55:38.725145   54133 openshift-tuned.go:881] Pod event watch channel closed.\nI0807 00:55:38.725170   54133 openshift-tuned.go:883] Increasing resyncPeriod to 108\n
Aug 07 00:57:50.751 E ns/openshift-monitoring pod/node-exporter-7dkhh node/ip-10-0-156-129.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:42:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 00:57:50.796 E ns/openshift-sdn pod/ovs-lvxfs node/ip-10-0-156-129.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): #526: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:16.678Z|00144|connmgr|INFO|br0<->unix#529: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:16.710Z|00145|bridge|INFO|bridge br0: deleted interface veth5ddff77c on port 12\n2020-08-07T00:55:16.785Z|00146|connmgr|INFO|br0<->unix#532: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:16.824Z|00147|connmgr|INFO|br0<->unix#536: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:16.856Z|00148|bridge|INFO|bridge br0: deleted interface veth98c32684 on port 10\n2020-08-07T00:55:16.905Z|00149|connmgr|INFO|br0<->unix#539: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:16.945Z|00150|connmgr|INFO|br0<->unix#542: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:16.976Z|00151|bridge|INFO|bridge br0: deleted interface vethd0196c2f on port 6\n2020-08-07T00:55:17.022Z|00152|connmgr|INFO|br0<->unix#545: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:17.060Z|00153|connmgr|INFO|br0<->unix#548: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:17.092Z|00154|bridge|INFO|bridge br0: deleted interface veth626349b5 on port 14\n2020-08-07T00:55:17.173Z|00155|connmgr|INFO|br0<->unix#551: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:17.216Z|00156|connmgr|INFO|br0<->unix#554: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:17.247Z|00157|bridge|INFO|bridge br0: deleted interface veth7009c55f on port 11\n2020-08-07T00:55:51.990Z|00021|jsonrpc|WARN|unix#524: receive error: Connection reset by peer\n2020-08-07T00:55:51.990Z|00022|reconnect|WARN|unix#524: connection dropped (Connection reset by peer)\n2020-08-07T00:56:01.355Z|00158|connmgr|INFO|br0<->unix#590: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:56:01.382Z|00159|connmgr|INFO|br0<->unix#593: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:56:01.403Z|00160|bridge|INFO|bridge br0: deleted interface veth9c01e00c on port 13\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 00:57:50.822 E ns/openshift-multus pod/multus-hlq8l node/ip-10-0-156-129.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 00:57:50.834 E ns/openshift-machine-config-operator pod/machine-config-daemon-cdt5v node/ip-10-0-156-129.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:57:56.839 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0807 00:38:32.963259       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:38:32.964801       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 00:38:32.964865       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0807 00:40:31.392425       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Aug 07 00:57:56.839 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:54:22.036939       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:54:22.037311       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:54:32.047733       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:54:32.048073       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:54:42.055539       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:54:42.056319       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:54:52.064972       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:54:52.065305       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:55:02.076865       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:55:02.077194       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:55:12.086006       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:55:12.086389       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:55:22.104503       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:55:22.104954       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:55:32.118037       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:55:32.118377       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 07 00:57:56.839 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): ynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0807 00:40:31.531762       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0807 00:40:31.531732       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0807 00:40:31.531906       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nE0807 00:40:31.532860       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:40:37.799126       1 webhook.go:107] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0807 00:40:37.799163       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nE0807 00:40:37.799633       1 webhook.go:107] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0807 00:40:37.799654       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\n
Aug 07 00:57:56.890 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): flector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=30708&timeout=7m9s&timeoutSeconds=429&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.736763       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=31792&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.741727       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=22883&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.742211       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32129&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.742459       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=32520&timeoutSeconds=362&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.742515       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=22884&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Aug 07 00:57:56.967 E ns/openshift-cluster-node-tuning-operator pod/tuned-5kmrk node/ip-10-0-129-90.us-west-1.compute.internal container=tuned container exited with code 143 (Error): penshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:55:20.350253   71215 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:55:20.493197   71215 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 00:55:20.493368   71215 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-5-ip-10-0-129-90.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:55:20.532425   71215 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-2-ip-10-0-129-90.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:55:20.757470   71215 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-129-90.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:55:20.985301   71215 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-3-ip-10-0-129-90.us-west-1.compute.internal) labels changed node wide: true\nI0807 00:55:25.341262   71215 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:55:25.342780   71215 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:55:25.470796   71215 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 00:55:26.745793   71215 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/olm-operator-54dc8cf7f8-t8kfh) labels changed node wide: true\nI0807 00:55:30.341324   71215 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:55:30.342823   71215 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:55:30.461505   71215 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 00:55:36.857964   71215 openshift-tuned.go:550] Pod (openshift-console/console-58c4c8cd76-c4txn) labels changed node wide: true\n
Aug 07 00:57:56.999 E ns/openshift-controller-manager pod/controller-manager-rxf42 node/ip-10-0-129-90.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 00:57:57.033 E ns/openshift-multus pod/multus-zdp4n node/ip-10-0-129-90.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 00:57:57.049 E ns/openshift-sdn pod/ovs-fd76m node/ip-10-0-129-90.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): t 0 s (2 deletes)\n2020-08-07T00:55:18.375Z|00202|connmgr|INFO|br0<->unix#664: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:18.426Z|00203|bridge|INFO|bridge br0: deleted interface vethce9d6281 on port 12\n2020-08-07T00:55:18.915Z|00204|connmgr|INFO|br0<->unix#668: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:18.967Z|00205|connmgr|INFO|br0<->unix#671: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:19.005Z|00206|bridge|INFO|bridge br0: deleted interface veth2d33a230 on port 20\n2020-08-07T00:55:19.593Z|00207|connmgr|INFO|br0<->unix#674: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:19.645Z|00208|connmgr|INFO|br0<->unix#677: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:19.670Z|00209|bridge|INFO|bridge br0: deleted interface veth44d6c769 on port 14\n2020-08-07T00:55:21.677Z|00210|connmgr|INFO|br0<->unix#680: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:21.761Z|00211|connmgr|INFO|br0<->unix#683: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:21.843Z|00212|bridge|INFO|bridge br0: deleted interface veth6bb1e6e5 on port 22\n2020-08-07T00:55:22.146Z|00213|connmgr|INFO|br0<->unix#686: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:22.240Z|00214|connmgr|INFO|br0<->unix#691: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:22.334Z|00215|bridge|INFO|bridge br0: deleted interface vethbd2d32c6 on port 7\n2020-08-07T00:55:22.458Z|00216|connmgr|INFO|br0<->unix#694: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:22.586Z|00217|connmgr|INFO|br0<->unix#697: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:22.650Z|00218|bridge|INFO|bridge br0: deleted interface veth330d5163 on port 6\n2020-08-07T00:55:22.715Z|00219|connmgr|INFO|br0<->unix#700: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:55:22.833Z|00220|connmgr|INFO|br0<->unix#703: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:55:22.897Z|00221|bridge|INFO|bridge br0: deleted interface veth1fea4da2 on port 3\n2020-08-07 00:55:38 info: Saving flows ...\nTerminated\n
Aug 07 00:57:57.063 E ns/openshift-sdn pod/sdn-controller-6zssq node/ip-10-0-129-90.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:45:00.657544       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 00:57:57.089 E ns/openshift-monitoring pod/node-exporter-nzlfd node/ip-10-0-129-90.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:42:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 00:57:57.111 E ns/openshift-multus pod/multus-admission-controller-zzbrf node/ip-10-0-129-90.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 07 00:57:57.199 E ns/openshift-machine-config-operator pod/machine-config-daemon-ntftn node/ip-10-0-129-90.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 00:57:57.219 E ns/openshift-machine-config-operator pod/machine-config-server-7qtqd node/ip-10-0-129-90.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:55:19.256929       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:55:19.259093       1 api.go:56] Launching server on :22624\nI0807 00:55:19.259515       1 api.go:56] Launching server on :22623\n
Aug 07 00:58:06.931 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 07 00:58:08.658 E ns/openshift-multus pod/multus-hlq8l node/ip-10-0-156-129.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 00:58:12.699 E ns/openshift-machine-config-operator pod/machine-config-daemon-cdt5v node/ip-10-0-156-129.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 00:58:27.046 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): atch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:55:38.311956       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:55:38.312130       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:55:38.312163       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:55:38.312450       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:55:38.317726       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0807 00:55:38.625815       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0807 00:55:38.625817       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0807 00:55:38.641407       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2301, ErrCode=NO_ERROR, debug=""\nI0807 00:55:38.641948       1 log.go:172] httputil: ReverseProxy read error during body copy: http2: server sent GOAWAY and closed the connection; LastStreamID=2301, ErrCode=NO_ERROR, debug=""\nI0807 00:55:38.650260       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0807 00:55:38.660586       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0807 00:55:38.660719       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nW0807 00:55:38.668028       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.133.167 10.0.155.17]\n
Aug 07 00:58:27.046 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): .go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 00:50:39.031984       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:50:39.032307       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nE0807 00:55:38.742188       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?allowWatchBookmarks=true&resourceVersion=31045&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 00:55:38.742339       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?allowWatchBookmarks=true&resourceVersion=30200&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Aug 07 00:58:27.046 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-90.us-west-1.compute.internal node/ip-10-0-129-90.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0807 00:39:11.856116       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 07 00:58:29.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-37.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/07 00:42:51 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 07 00:58:29.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-37.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/07 00:42:51 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 00:42:51 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 00:42:51 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 00:42:51 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 00:42:51 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 00:42:51 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 00:42:51 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 00:42:51 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 00:42:51 http.go:106: HTTPS: listening on [::]:9091\n2020/08/07 00:46:14 oauthproxy.go:774: basicauth: 10.128.2.25:42462 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 00:50:45 oauthproxy.go:774: basicauth: 10.128.2.25:47408 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 00:55:15 oauthproxy.go:774: basicauth: 10.128.2.25:52398 Authorization header does not start with 'Basic', skipping basic authentication\n
Aug 07 00:58:29.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-37.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-07T00:42:50.958528149Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-07T00:42:50.95864613Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-07T00:42:50.960089719Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-07T00:42:56.096028041Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 07 00:58:33.476 E ns/openshift-machine-config-operator pod/machine-config-daemon-ntftn node/ip-10-0-129-90.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 00:58:35.742 E ns/openshift-multus pod/multus-zdp4n node/ip-10-0-129-90.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 00:58:39.321 E ns/openshift-multus pod/multus-zdp4n node/ip-10-0-129-90.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 00:59:08.784 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-756f8f589d-429vv node/ip-10-0-133-167.us-west-1.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0807 00:56:16.432218       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:56:41.353003       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:56:41.353142       1 status.go:25] syncOperatorStatus()\nI0807 00:56:41.395780       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:56:41.396014       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:56:41.594765       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:56:41.681500       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:56:41.689091       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:56:41.695514       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:58:07.636200       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:58:07.636240       1 status.go:25] syncOperatorStatus()\nI0807 00:58:07.651117       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:58:07.651240       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:58:07.732571       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:58:07.798363       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:58:07.805367       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:58:07.811159       1 tuned_controller.go:320] syncDaemonSet()\nI0807 00:58:32.341434       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0807 00:58:32.341466       1 status.go:25] syncOperatorStatus()\nI0807 00:58:32.355296       1 tuned_controller.go:188] syncServiceAccount()\nI0807 00:58:32.355414       1 tuned_controller.go:215] syncClusterRole()\nI0807 00:58:32.422842       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0807 00:58:32.502102       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:58:32.509751       1 tuned_controller.go:281] syncClusterConfigMap()\nI0807 00:58:32.516176       1 tuned_controller.go:320] syncDaemonSet()\nF0807 00:59:07.756788       1 main.go:82] <nil>\n
Aug 07 00:59:11.645 E ns/openshift-cluster-machine-approver pod/machine-approver-7d6447c6-bcxqp node/ip-10-0-133-167.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): 0:40:52.086663       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0807 00:40:52.086716       1 main.go:236] Starting Machine Approver\nI0807 00:40:52.187992       1 main.go:146] CSR csr-bz8nj added\nI0807 00:40:52.188016       1 main.go:149] CSR csr-bz8nj is already approved\nI0807 00:40:52.188041       1 main.go:146] CSR csr-jclpk added\nI0807 00:40:52.188050       1 main.go:149] CSR csr-jclpk is already approved\nI0807 00:40:52.188061       1 main.go:146] CSR csr-jgvmm added\nI0807 00:40:52.188069       1 main.go:149] CSR csr-jgvmm is already approved\nI0807 00:40:52.189100       1 main.go:146] CSR csr-wk4hk added\nI0807 00:40:52.189122       1 main.go:149] CSR csr-wk4hk is already approved\nI0807 00:40:52.189136       1 main.go:146] CSR csr-2kclf added\nI0807 00:40:52.189145       1 main.go:149] CSR csr-2kclf is already approved\nI0807 00:40:52.189160       1 main.go:146] CSR csr-6c8rx added\nI0807 00:40:52.189177       1 main.go:149] CSR csr-6c8rx is already approved\nI0807 00:40:52.189191       1 main.go:146] CSR csr-cjb9l added\nI0807 00:40:52.189199       1 main.go:149] CSR csr-cjb9l is already approved\nI0807 00:40:52.189209       1 main.go:146] CSR csr-rn7pc added\nI0807 00:40:52.189217       1 main.go:149] CSR csr-rn7pc is already approved\nI0807 00:40:52.189226       1 main.go:146] CSR csr-s5ndw added\nI0807 00:40:52.189232       1 main.go:149] CSR csr-s5ndw is already approved\nI0807 00:40:52.189260       1 main.go:146] CSR csr-w5rwx added\nI0807 00:40:52.189276       1 main.go:149] CSR csr-w5rwx is already approved\nI0807 00:40:52.189291       1 main.go:146] CSR csr-2b557 added\nI0807 00:40:52.189298       1 main.go:149] CSR csr-2b557 is already approved\nI0807 00:40:52.189305       1 main.go:146] CSR csr-2jctn added\nI0807 00:40:52.189310       1 main.go:149] CSR csr-2jctn is already approved\nW0807 00:55:39.412340       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 21759 (32532)\n
Aug 07 00:59:12.097 E ns/openshift-console pod/console-58c4c8cd76-49rzr node/ip-10-0-133-167.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020/08/7 00:55:28 cmd/main: cookies are secure!\n2020/08/7 00:55:29 cmd/main: Binding to [::]:8443...\n2020/08/7 00:55:29 cmd/main: using TLS\n
Aug 07 00:59:12.970 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6675c4b96-42v7h node/ip-10-0-133-167.us-west-1.compute.internal container=operator container exited with code 255 (Error): paces/openshift-controller-manager/roles/prometheus-k8s\nI0807 00:58:20.373722       1 request.go:538] Throttling request took 196.509727ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0807 00:58:27.999512       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received\nI0807 00:58:30.484593       1 httplog.go:90] GET /metrics: (6.43107ms) 200 [Prometheus/2.14.0 10.131.0.28:56202]\nI0807 00:58:40.176365       1 request.go:538] Throttling request took 149.883035ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0807 00:58:40.376309       1 request.go:538] Throttling request took 196.397159ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0807 00:58:47.254123       1 httplog.go:90] GET /metrics: (5.44861ms) 200 [Prometheus/2.14.0 10.129.2.14:35044]\nI0807 00:59:00.174768       1 request.go:538] Throttling request took 156.573166ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0807 00:59:00.374759       1 request.go:538] Throttling request took 197.374381ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0807 00:59:00.483594       1 httplog.go:90] GET /metrics: (5.46267ms) 200 [Prometheus/2.14.0 10.131.0.28:56202]\nI0807 00:59:07.974922       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 0 items received\nI0807 00:59:11.822116       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:59:11.822178       1 leaderelection.go:66] leaderelection lost\nI0807 00:59:11.846388       1 config_observer_controller.go:159] Shutting down ConfigObserver\n
Aug 07 00:59:13.017 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7fb6f95967-fqqr9 node/ip-10-0-133-167.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error):        1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=31792&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\\nE0807 00:55:38.741727       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=22883&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\\nE0807 00:55:38.742211       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32129&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp [::1]:6443: connect: connection refused\\nE0807 00:55:38.742459       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=32520&timeoutSeconds=362&watch=true: dial tcp [::1]:6443: connect: connection refused\\nE0807 00:55:38.742515       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=22884&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp [::1]:6443: connect: connection refused\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-129-90.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-129-90.us-west-1.compute.internal container=\"scheduler\" is not ready"\nI0807 00:59:12.007314       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 00:59:12.007789       1 leaderelection.go:66] leaderelection lost\n
Aug 07 00:59:34.745 E ns/openshift-monitoring pod/prometheus-operator-7c8594988c-nrtk2 node/ip-10-0-129-90.us-west-1.compute.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-08-07T00:59:34.015702436Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."\nts=2020-08-07T00:59:34.222759243Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-08-07T00:59:34.230788628Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Aug 07 00:59:35.879 E ns/openshift-monitoring pod/cluster-monitoring-operator-565cc7b8b7-6c9vf node/ip-10-0-129-90.us-west-1.compute.internal container=cluster-monitoring-operator container exited with code 1 (Error): W0807 00:59:35.176005       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\n
Aug 07 01:00:39.338 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:01:04.503 E ns/openshift-cluster-node-tuning-operator pod/tuned-2rsp4 node/ip-10-0-128-37.us-west-1.compute.internal container=tuned container exited with code 143 (Error): hift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:58:33.663676   52795 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-68957f6c47-fqn7v) labels changed node wide: true\nI0807 00:58:36.758124   52795 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:58:36.759508   52795 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:58:36.872091   52795 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:58:43.698789   52795 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0807 00:58:46.758135   52795 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:58:46.759397   52795 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:58:46.874778   52795 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:59:03.652928   52795 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-9414/foo-5mq24) labels changed node wide: false\nI0807 00:59:03.676971   52795 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-9414/foo-zz4dk) labels changed node wide: true\nI0807 00:59:06.758127   52795 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:59:06.759862   52795 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:59:06.873023   52795 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 00:59:15.780738   52795 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-9841/service-test-gj6ph) labels changed node wide: true\nI0807 00:59:16.758163   52795 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:59:16.759964   52795 openshift-tuned.go:441] Getting recommended profile...\n
Aug 07 01:01:04.524 E ns/openshift-monitoring pod/node-exporter-s4hp8 node/ip-10-0-128-37.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:41:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:01:04.574 E ns/openshift-multus pod/multus-b7kh9 node/ip-10-0-128-37.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:01:04.594 E ns/openshift-sdn pod/ovs-ttjth node/ip-10-0-128-37.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 8:29.177Z|00190|bridge|INFO|bridge br0: deleted interface veth5da5eef0 on port 19\n2020-08-07T00:58:29.236Z|00191|connmgr|INFO|br0<->unix#750: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:58:29.288Z|00192|connmgr|INFO|br0<->unix#753: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:58:29.338Z|00193|bridge|INFO|bridge br0: deleted interface veth813f67f7 on port 18\n2020-08-07T00:58:29.379Z|00194|connmgr|INFO|br0<->unix#756: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:58:29.415Z|00195|connmgr|INFO|br0<->unix#759: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:58:29.437Z|00196|bridge|INFO|bridge br0: deleted interface vethb8e545f8 on port 17\n2020-08-07T00:58:29.426Z|00024|jsonrpc|WARN|Dropped 9 log messages in last 774 seconds (most recently, 774 seconds ago) due to excessive rate\n2020-08-07T00:58:29.426Z|00025|jsonrpc|WARN|unix#689: receive error: Connection reset by peer\n2020-08-07T00:58:29.426Z|00026|reconnect|WARN|unix#689: connection dropped (Connection reset by peer)\n2020-08-07T00:58:57.774Z|00197|connmgr|INFO|br0<->unix#783: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:58:57.802Z|00198|connmgr|INFO|br0<->unix#786: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:58:57.823Z|00199|bridge|INFO|bridge br0: deleted interface veth8c443a11 on port 15\n2020-08-07T00:58:58.186Z|00200|connmgr|INFO|br0<->unix#789: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:58:58.213Z|00201|connmgr|INFO|br0<->unix#792: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:58:58.235Z|00202|bridge|INFO|bridge br0: deleted interface veth063e61ce on port 3\n2020-08-07T00:59:13.567Z|00203|connmgr|INFO|br0<->unix#807: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:13.595Z|00204|connmgr|INFO|br0<->unix#810: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:13.616Z|00205|bridge|INFO|bridge br0: deleted interface vethc9ec51ac on port 21\n2020-08-07 00:59:16 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:01:04.646 E ns/openshift-machine-config-operator pod/machine-config-daemon-lvwsv node/ip-10-0-128-37.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:01:22.064 E ns/openshift-multus pod/multus-b7kh9 node/ip-10-0-128-37.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:01:27.123 E ns/openshift-machine-config-operator pod/machine-config-daemon-lvwsv node/ip-10-0-128-37.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 01:01:32.151 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Grafana host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io grafana)
Aug 07 01:01:44.308 E ns/openshift-marketplace pod/community-operators-869899bb69-n7rlz node/ip-10-0-138-213.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 07 01:01:44.429 E ns/openshift-marketplace pod/certified-operators-7fff7dc768-5trb5 node/ip-10-0-138-213.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Aug 07 01:01:45.323 E ns/openshift-monitoring pod/prometheus-adapter-69474c4788-dgkmg node/ip-10-0-138-213.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0807 00:55:25.570808       1 adapter.go:93] successfully using in-cluster auth\nI0807 00:55:26.941275       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 07 01:01:48.543 E ns/openshift-monitoring pod/node-exporter-8kbwh node/ip-10-0-133-167.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:41:41Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:41:41Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:01:48.572 E ns/openshift-cluster-node-tuning-operator pod/tuned-c22mw node/ip-10-0-133-167.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 7 00:59:06.932906   70053 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-4-ip-10-0-133-167.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:59:07.339673   70053 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-4-ip-10-0-133-167.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:59:07.523052   70053 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ip-10-0-133-167.us-west-1.compute.internal) labels changed node wide: false\nI0807 00:59:07.799954   70053 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-f6877cdcb-76jb8) labels changed node wide: true\nI0807 00:59:12.767463   70053 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:59:12.770881   70053 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:59:12.904257   70053 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 00:59:20.791947   70053 openshift-tuned.go:550] Pod (openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7fb6f95967-fqqr9) labels changed node wide: true\nI0807 00:59:22.767500   70053 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 00:59:22.770176   70053 openshift-tuned.go:441] Getting recommended profile...\nI0807 00:59:22.891426   70053 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 00:59:33.031106   70053 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-787874dd88-gzbbl) labels changed node wide: true\n2020-08-07 00:59:33,512 INFO     tuned.daemon.controller: terminating controller\n2020-08-07 00:59:33,513 INFO     tuned.daemon.daemon: stopping tuning\nI0807 00:59:33.512840   70053 openshift-tuned.go:137] Received signal: terminated\nI0807 00:59:33.512891   70053 openshift-tuned.go:304] Sending TERM to PID 70889\n
Aug 07 01:01:48.607 E ns/openshift-controller-manager pod/controller-manager-cwsmx node/ip-10-0-133-167.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 01:01:48.676 E ns/openshift-sdn pod/sdn-controller-xhr9q node/ip-10-0-133-167.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:45:12.545148       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 01:01:48.709 E ns/openshift-multus pod/multus-admission-controller-w784l node/ip-10-0-133-167.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Aug 07 01:01:48.797 E ns/openshift-sdn pod/ovs-fqqmd node/ip-10-0-133-167.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): FO|bridge br0: deleted interface veth84c141d3 on port 14\n2020-08-07T00:59:11.412Z|00245|connmgr|INFO|br0<->unix#861: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:11.500Z|00246|connmgr|INFO|br0<->unix#864: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:11.548Z|00247|bridge|INFO|bridge br0: deleted interface veth4ca0c401 on port 28\n2020-08-07T00:59:11.784Z|00248|connmgr|INFO|br0<->unix#868: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:11.852Z|00249|connmgr|INFO|br0<->unix#872: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:11.963Z|00250|bridge|INFO|bridge br0: deleted interface veth2eee2040 on port 29\n2020-08-07T00:59:12.289Z|00251|connmgr|INFO|br0<->unix#875: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:12.333Z|00252|connmgr|INFO|br0<->unix#878: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:12.359Z|00253|bridge|INFO|bridge br0: deleted interface veth69e318ec on port 7\n2020-08-07T00:59:12.407Z|00254|connmgr|INFO|br0<->unix#881: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:11.898Z|00022|jsonrpc|WARN|unix#756: send error: Broken pipe\n2020-08-07T00:59:11.899Z|00023|reconnect|WARN|unix#756: connection dropped (Broken pipe)\n2020-08-07T00:59:12.471Z|00255|connmgr|INFO|br0<->unix#884: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:12.566Z|00256|bridge|INFO|bridge br0: deleted interface vethcd553523 on port 24\n2020-08-07T00:59:31.247Z|00257|connmgr|INFO|br0<->unix#900: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T00:59:31.275Z|00258|connmgr|INFO|br0<->unix#903: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T00:59:31.307Z|00259|bridge|INFO|bridge br0: deleted interface veth56c95aa5 on port 13\n2020-08-07T00:59:31.291Z|00024|jsonrpc|WARN|unix#784: receive error: Connection reset by peer\n2020-08-07T00:59:31.291Z|00025|reconnect|WARN|unix#784: connection dropped (Connection reset by peer)\n2020-08-07 00:59:33 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:01:48.827 E ns/openshift-multus pod/multus-kmn86 node/ip-10-0-133-167.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:01:48.908 E ns/openshift-machine-config-operator pod/machine-config-daemon-hqjkk node/ip-10-0-133-167.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:01:48.953 E ns/openshift-machine-config-operator pod/machine-config-server-6hwb6 node/ip-10-0-133-167.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:55:34.189207       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:55:34.190623       1 api.go:56] Launching server on :22624\nI0807 00:55:34.190685       1 api.go:56] Launching server on :22623\n
Aug 07 01:01:49.014 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): red revision has been compacted\nE0807 00:59:33.253798       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.253914       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254017       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254028       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254115       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254194       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254945       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254376       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.254459       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.327539       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.327710       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.327839       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 00:59:33.327944       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0807 00:59:33.493510       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0807 00:59:33.493681       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Aug 07 01:01:49.014 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0807 00:37:14.877640       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 07 01:01:49.014 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 00:58:40.679204       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:40.679754       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 00:58:40.889472       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:40.889840       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 07 01:01:49.039 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0807 00:55:39.300951       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ResourceQuota ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.344880       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Role ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.345094       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PodTemplate ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.345235       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.372325       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.383451       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.CronJob ended with: too old resource version: 21759 (32531)\nW0807 00:55:39.383787       1 reflector.go:289] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: watch of *v1.ClusterResourceQuota ended with: too old resource version: 21819 (32531)\nE0807 00:56:13.359362       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0807 00:56:24.216494       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nW0807 00:56:29.617987       1 reflector.go:289] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: The resourceVersion for the provided watch is too old.\n
Aug 07 01:01:49.039 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:58:22.319073       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:22.319449       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:58:32.331915       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:32.333234       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:58:42.340477       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:42.340911       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:58:52.348090       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:58:52.348544       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:59:02.359175       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:59:02.359527       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:59:12.370051       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:59:12.371082       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:59:22.380973       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:59:22.381327       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 00:59:32.425383       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 00:59:32.425737       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 07 01:01:49.039 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error):  1\nI0807 00:59:26.604828       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on replicasets.apps "packageserver-57946bf9d4": the object has been modified; please apply your changes to the latest version and try again\nI0807 00:59:26.631406       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-57946bf9d4", UID:"479f1970-92eb-4b30-976c-33b534db5a32", APIVersion:"apps/v1", ResourceVersion:"35569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-57946bf9d4-8glbb\nI0807 00:59:30.935527       1 replica_set.go:608] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-544b9d8f56, need 0, deleting 1\nI0807 00:59:30.935618       1 replica_set.go:226] Found 6 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-544b9d8f56: packageserver-6d6995b85, packageserver-d58dc747d, packageserver-57946bf9d4, packageserver-6544cffbd8, packageserver-7997569559, packageserver-544b9d8f56\nI0807 00:59:30.935745       1 controller_utils.go:602] Controller packageserver-544b9d8f56 deleting pod openshift-operator-lifecycle-manager/packageserver-544b9d8f56-cz4rv\nI0807 00:59:30.948348       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-544b9d8f56", UID:"f92d035f-46c0-4526-8dbf-2a7594735e3a", APIVersion:"apps/v1", ResourceVersion:"35614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-544b9d8f56-cz4rv\nI0807 00:59:31.047303       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"aefac2a1-97f8-4388-a29f-52ce2514f6c6", APIVersion:"apps/v1", ResourceVersion:"35580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-544b9d8f56 to 0\n
Aug 07 01:01:49.056 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-133-167.us-west-1.compute.internal node/ip-10-0-133-167.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): I0807 00:59:18.086694       1 scheduler.go:667] pod openshift-marketplace/certified-operators-7fff7dc768-5trb5 is bound successfully on node "ip-10-0-138-213.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 00:59:18.388497       1 scheduler.go:667] pod openshift-marketplace/community-operators-6b9d978555-g9sx8 is bound successfully on node "ip-10-0-138-213.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 00:59:18.674069       1 scheduler.go:667] pod openshift-marketplace/community-operators-869899bb69-n7rlz is bound successfully on node "ip-10-0-138-213.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 00:59:24.956135       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7f5b675fdb-kmpfc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 00:59:26.637413       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-57946bf9d4-8glbb is bound successfully on node "ip-10-0-155-17.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\n
Aug 07 01:02:27.827 E ns/openshift-multus pod/multus-kmn86 node/ip-10-0-133-167.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:02:57.371 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator monitoring is reporting a failure: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io alertmanager-main)
Aug 07 01:03:00.945 E ns/openshift-insights pod/insights-operator-b96df85d6-7j6g9 node/ip-10-0-155-17.us-west-1.compute.internal container=operator container exited with code 2 (Error): 0807 01:02:27.636447       1 diskrecorder.go:63] Recording config/configmaps/openshift-install/version with fingerprint=\nI0807 01:02:27.636454       1 diskrecorder.go:63] Recording config/configmaps/openshift-install-manifests/invoker with fingerprint=\nI0807 01:02:27.636460       1 diskrecorder.go:63] Recording config/configmaps/openshift-install-manifests/version with fingerprint=\nI0807 01:02:27.639812       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0807 01:02:27.639900       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0807 01:02:27.642988       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0807 01:02:27.648234       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0807 01:02:27.654221       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0807 01:02:27.659381       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0807 01:02:27.664566       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0807 01:02:27.668224       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0807 01:02:27.671364       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0807 01:02:27.678404       1 diskrecorder.go:170] Writing 53 records to /var/lib/insights-operator/insights-2020-08-07-010227.tar.gz\nI0807 01:02:27.685759       1 diskrecorder.go:134] Wrote 53 records to disk in 7ms\nI0807 01:02:27.685784       1 periodic.go:151] Periodic gather config completed in 159ms\nI0807 01:02:43.701701       1 httplog.go:90] GET /metrics: (6.283184ms) 200 [Prometheus/2.14.0 10.129.2.14:49512]\nI0807 01:02:50.720973       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0807 01:02:50.726729       1 configobserver.go:93] Found cloud.openshift.com token\nI0807 01:02:50.726756       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0807 01:02:56.061276       1 httplog.go:90] GET /metrics: (5.661044ms) 200 [Prometheus/2.14.0 10.128.2.9:47072]\n
Aug 07 01:03:01.471 E ns/openshift-console-operator pod/console-operator-54cb95495d-nwj7g node/ip-10-0-155-17.us-west-1.compute.internal container=console-operator container exited with code 255 (Error):       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 36493 (36495)\nW0807 01:01:38.846838       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 36576 (36577)\nE0807 01:02:57.578843       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-08-06-235641\nI0807 01:02:57.629591       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-08-07T00:19:03Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-07T00:43:48Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-07T01:02:57Z","message":"DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-08-06-235641","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-08-07T00:19:02Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0807 01:02:57.663959       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"dfb3f72a-3991-4012-81f5-0595491af7c5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-08-06-235641")\nE0807 01:02:57.860953       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-08-06-235641\nE0807 01:02:59.088284       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-08-06-235641\nI0807 01:02:59.813355       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:02:59.813417       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:03:03.605 E ns/openshift-machine-api pod/machine-api-controllers-5649c6b74d-jqsk4 node/ip-10-0-155-17.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 01:03:04.625 E ns/openshift-machine-api pod/machine-api-operator-84b84478f-xm4tx node/ip-10-0-155-17.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Aug 07 01:03:33.260 E kube-apiserver Kube API started failing: rpc error: code = Unavailable desc = etcdserver: leader changed
Aug 07 01:03:51.117 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Service failed: retrieving Service object failed: etcdserver: request timed out
Aug 07 01:05:46.586 E ns/openshift-controller-manager pod/controller-manager-wckg5 node/ip-10-0-155-17.us-west-1.compute.internal container=controller-manager container exited with code 255 (Error): 
Aug 07 01:05:46.645 E ns/openshift-sdn pod/sdn-controller-dr69d node/ip-10-0-155-17.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): gMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"b1fcc564-2bc7-4985-b695-e34ed99cbf94", ResourceVersion:"27477", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732356025, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-155-17\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-08-07T00:13:45Z\",\"renewTime\":\"2020-08-07T00:45:07Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-155-17 became leader'\nI0807 00:45:07.139492       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0807 00:45:07.145730       1 master.go:51] Initializing SDN master\nI0807 00:45:07.195067       1 network_controller.go:60] Started OpenShift Network Controller\nW0807 00:55:39.397230       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 24716 (32532)\nW0807 00:59:34.519328       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 24706 (32531)\nW0807 00:59:34.772976       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 21759 (35661)\n
Aug 07 01:05:46.663 E ns/openshift-multus pod/multus-admission-controller-qp888 node/ip-10-0-155-17.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 07 01:05:46.693 E ns/openshift-sdn pod/ovs-thhbx node/ip-10-0-155-17.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 0 s (2 deletes)\n2020-08-07T01:03:05.015Z|00289|connmgr|INFO|br0<->unix#1087: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:05.105Z|00290|bridge|INFO|bridge br0: deleted interface veth10e19d15 on port 21\n2020-08-07T01:03:05.175Z|00291|connmgr|INFO|br0<->unix#1090: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:03:05.319Z|00292|connmgr|INFO|br0<->unix#1093: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:05.362Z|00293|bridge|INFO|bridge br0: deleted interface veth50a82b5d on port 10\n2020-08-07T01:03:05.417Z|00294|connmgr|INFO|br0<->unix#1097: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:03:05.477Z|00295|connmgr|INFO|br0<->unix#1100: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:05.536Z|00296|bridge|INFO|bridge br0: deleted interface veth9fb8de2c on port 24\n2020-08-07T01:03:05.583Z|00297|connmgr|INFO|br0<->unix#1103: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:03:05.658Z|00298|connmgr|INFO|br0<->unix#1106: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:05.695Z|00299|bridge|INFO|bridge br0: deleted interface veth7e511a45 on port 35\n2020-08-07T01:03:05.747Z|00300|connmgr|INFO|br0<->unix#1109: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:03:05.858Z|00301|connmgr|INFO|br0<->unix#1112: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:05.922Z|00302|bridge|INFO|bridge br0: deleted interface veth6196e48c on port 7\n2020-08-07T01:03:22.720Z|00303|connmgr|INFO|br0<->unix#1130: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:03:22.758Z|00304|connmgr|INFO|br0<->unix#1133: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:03:22.787Z|00305|bridge|INFO|bridge br0: deleted interface vethff6fbb9b on port 37\n2020-08-07T01:03:22.771Z|00031|jsonrpc|WARN|unix#986: receive error: Connection reset by peer\n2020-08-07T01:03:22.771Z|00032|reconnect|WARN|unix#986: connection dropped (Connection reset by peer)\n2020-08-07 01:03:31 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:05:46.753 E ns/openshift-multus pod/multus-5pjh8 node/ip-10-0-155-17.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:05:46.790 E ns/openshift-machine-config-operator pod/machine-config-daemon-bh92b node/ip-10-0-155-17.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:05:46.825 E ns/openshift-machine-config-operator pod/machine-config-server-jf76d node/ip-10-0-155-17.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:55:41.519263       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:55:41.523265       1 api.go:56] Launching server on :22624\nI0807 00:55:41.526527       1 api.go:56] Launching server on :22623\n
Aug 07 01:05:46.848 E ns/openshift-cluster-node-tuning-operator pod/tuned-qsghc node/ip-10-0-155-17.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 40 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:03:03.119759  122340 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-155-17.us-west-1.compute.internal) labels changed node wide: false\nI0807 01:03:03.256845  122340 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-155-17.us-west-1.compute.internal) labels changed node wide: false\nI0807 01:03:04.314961  122340 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/catalog-operator-68ff46fc45-74xcf) labels changed node wide: true\nI0807 01:03:07.630679  122340 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:03:07.632349  122340 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:03:07.768586  122340 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:03:10.941844  122340 openshift-tuned.go:550] Pod (openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-54b954ffcc-mpbjn) labels changed node wide: true\nI0807 01:03:12.630729  122340 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:03:12.631963  122340 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:03:12.813608  122340 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:03:30.969853  122340 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-5c7d6d5c75-qx2w2) labels changed node wide: true\n2020-08-07 01:03:31,597 INFO     tuned.daemon.controller: terminating controller\n2020-08-07 01:03:31,598 INFO     tuned.daemon.daemon: stopping tuning\nI0807 01:03:31.598361  122340 openshift-tuned.go:137] Received signal: terminated\nI0807 01:03:31.598419  122340 openshift-tuned.go:304] Sending TERM to PID 122542\n
Aug 07 01:05:46.880 E ns/openshift-monitoring pod/node-exporter-2hsrb node/ip-10-0-155-17.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:42:47Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:47Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:05:46.985 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0807 00:41:12.244456       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 07 01:05:46.985 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:03:03.275291       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:03.276225       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:03:03.512336       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:03.512726       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 07 01:05:47.007 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0807 00:39:58.161527       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 00:39:58.163157       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 00:39:58.163292       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0807 00:42:21.910323       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:40.865577       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:54.965440       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Aug 07 01:05:47.007 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:02:45.039609       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:02:45.040065       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:02:55.050874       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:02:55.051211       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:03.246478       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:03.246949       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:03.290471       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:03.291026       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:03.291956       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:03.309901       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:05.113243       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:05.113635       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:15.117415       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:15.117713       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:03:25.147441       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:03:25.147939       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 07 01:05:47.007 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): t: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:35.353390       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0807 00:42:36.583850       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:39.938002       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:45.680419       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:49.107757       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:42:53.781543       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 00:43:02.258162       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
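Editor's note: the repeated leaderelection.go errors in the two controller-manager entries above come from client-go's leader-election helper failing to refresh its resource lock, presumably while the local kube-apiserver on this master (localhost:6443) is itself restarting during the upgrade. A rough, self-contained sketch of that loop follows; the namespace, lock name, lock type, and durations are illustrative assumptions, not values read from this cluster (the failing controllers above use ConfigMap-based locks; Lease locks are shown only as the commonly recommended type).

    // Sketch of a client-go leader-election loop like the one whose renew
    // errors appear in the log above. All names and durations are hypothetical.
    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        lock, err := resourcelock.New(
            resourcelock.LeasesResourceLock,
            "kube-system", "example-controller", // hypothetical namespace/name
            client.CoreV1(), client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: id},
        )
        if err != nil {
            log.Fatal(err)
        }

        // Failed renewals (e.g. "connection refused" while the local apiserver
        // restarts) are retried every RetryPeriod; leadership is only lost if
        // RenewDeadline passes without a successful update of the lock object.
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lock, starting controllers") },
                OnStoppedLeading: func() { log.Println("lost lock, shutting down") },
            },
        })
    }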
Aug 07 01:05:47.027 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-17.us-west-1.compute.internal node/ip-10-0-155-17.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): 250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 01:03:06.460208       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-84cb4cb65-gpw56 is bound successfully on node "ip-10-0-133-167.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 01:03:07.310453       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7f5b675fdb-bbxm2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:03:12.311413       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7f5b675fdb-bbxm2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:03:30.710306       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-84cb4cb65-lqllb is bound successfully on node "ip-10-0-129-90.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416940Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265964Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0807 01:03:30.968956       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7f5b675fdb-bbxm2: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
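Editor's note: the scheduler entry above is waiting to place an etcd-quorum-guard replica because, per the log, every remaining node either fails the node selector, already runs a replica (anti-affinity), or is cordoned for the upgrade. The sketch below shows how that kind of constraint, hard pod anti-affinity plus a PodDisruptionBudget, is typically expressed with client-go types; the names, image, and counts are hypothetical, not the operator's actual manifest.

    // Sketch (assumed, not the real manifest): one replica per master node,
    // with a PDB that lets at most one replica be disrupted at a time, which
    // is what makes a drain wait instead of evicting all replicas at once.
    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        policyv1beta1 "k8s.io/api/policy/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"app": "quorum-guard-example"} // hypothetical selector

        // Hard anti-affinity: no two replicas may land on the same node.
        antiAffinity := &corev1.Affinity{
            PodAntiAffinity: &corev1.PodAntiAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                    LabelSelector: &metav1.LabelSelector{MatchLabels: labels},
                    TopologyKey:   "kubernetes.io/hostname",
                }},
            },
        }

        deploy := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "quorum-guard-example"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(3),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        NodeSelector: map[string]string{"node-role.kubernetes.io/master": ""},
                        Affinity:     antiAffinity,
                        Containers: []corev1.Container{{
                            Name:    "guard",
                            Image:   "registry.example.com/guard:latest", // placeholder image
                            Command: []string{"/bin/sleep", "infinity"},
                        }},
                    },
                },
            },
        }

        // At most one replica may be voluntarily disrupted at any time.
        maxUnavailable := intstr.FromInt(1)
        pdb := policyv1beta1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "quorum-guard-example"},
            Spec: policyv1beta1.PodDisruptionBudgetSpec{
                MaxUnavailable: &maxUnavailable,
                Selector:       &metav1.LabelSelector{MatchLabels: labels},
            },
        }

        fmt.Printf("deployment replicas=%d, pdb maxUnavailable=%s\n",
            *deploy.Spec.Replicas, pdb.Spec.MaxUnavailable.String())
    }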
Aug 07 01:06:18.239 E ns/openshift-multus pod/multus-5pjh8 node/ip-10-0-155-17.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:06:24.047 E ns/openshift-machine-config-operator pod/machine-config-daemon-bh92b node/ip-10-0-155-17.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 01:06:26.565 E ns/openshift-multus pod/multus-5pjh8 node/ip-10-0-155-17.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:07:38.623 E ns/openshift-monitoring pod/node-exporter-dj6fs node/ip-10-0-138-213.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): porter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:42:02Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\ntime="2020-08-07T00:46:35Z" level=error msg="ERROR: netclass collector failed after 0.004818s: could not get net class info: error obtaining net class info: could not access file phys_port_id: no such device" source="collector.go:132"\n
Aug 07 01:07:38.667 E ns/openshift-sdn pod/ovs-d4gfv node/ip-10-0-138-213.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): in the last 0 s (2 deletes)\n2020-08-07T01:01:44.442Z|00215|connmgr|INFO|br0<->unix#937: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:01:44.474Z|00216|bridge|INFO|bridge br0: deleted interface vetheeecb3c0 on port 16\n2020-08-07T01:01:44.521Z|00217|connmgr|INFO|br0<->unix#940: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:01:44.558Z|00218|connmgr|INFO|br0<->unix#943: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:01:44.603Z|00219|bridge|INFO|bridge br0: deleted interface veth0d8867e8 on port 4\n2020-08-07T01:01:44.668Z|00220|connmgr|INFO|br0<->unix#946: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:01:44.713Z|00221|connmgr|INFO|br0<->unix#949: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:01:44.758Z|00222|bridge|INFO|bridge br0: deleted interface vethecde43a9 on port 13\n2020-08-07T01:01:44.798Z|00223|connmgr|INFO|br0<->unix#952: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:01:44.863Z|00224|connmgr|INFO|br0<->unix#955: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:01:44.897Z|00225|bridge|INFO|bridge br0: deleted interface veth0c25628c on port 18\n2020-08-07T01:02:12.301Z|00226|connmgr|INFO|br0<->unix#979: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:02:12.332Z|00227|connmgr|INFO|br0<->unix#982: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:02:12.355Z|00228|bridge|INFO|bridge br0: deleted interface veth19784397 on port 24\n2020-08-07T01:02:27.501Z|00229|connmgr|INFO|br0<->unix#994: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:02:27.529Z|00230|connmgr|INFO|br0<->unix#997: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:02:27.552Z|00231|bridge|INFO|bridge br0: deleted interface vethf06b9ac6 on port 3\n2020-08-07T01:03:06.976Z|00024|jsonrpc|WARN|unix#908: receive error: Connection reset by peer\n2020-08-07T01:03:06.976Z|00025|reconnect|WARN|unix#908: connection dropped (Connection reset by peer)\n2020-08-07 01:05:54 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:07:38.693 E ns/openshift-multus pod/multus-lvjj2 node/ip-10-0-138-213.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:07:38.709 E ns/openshift-machine-config-operator pod/machine-config-daemon-bn2gv node/ip-10-0-138-213.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:07:38.711 E ns/openshift-cluster-node-tuning-operator pod/tuned-6m4wd node/ip-10-0-138-213.us-west-1.compute.internal container=tuned container exited with code 143 (Error): NFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-08-07 01:02:22,576 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-08-07 01:02:22,577 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-08-07 01:02:22,579 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-08-07 01:02:22,682 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-08-07 01:02:22,683 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0807 01:02:23.624761  123459 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-9414/foo-x2f8d) labels changed node wide: true\nI0807 01:02:27.294507  123459 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:02:27.296839  123459 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:02:27.417818  123459 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:02:33.615758  123459 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-9841/service-test-ldzp8) labels changed node wide: true\nI0807 01:02:37.294508  123459 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:02:37.296100  123459 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:02:37.410490  123459 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:05:53.612894  123459 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-7fff7dc768-5trb5) labels changed node wide: true\nI0807 01:05:54.197584  123459 openshift-tuned.go:137] Received signal: terminated\nI0807 01:05:54.197669  123459 openshift-tuned.go:304] Sending TERM to PID 123618\n2020-08-07 01:05:54,198 INFO     tuned.daemon.controller: terminating controller\n2020-08-07 01:05:54,198 INFO     tuned.daemon.daemon: stopping tuning\n
Aug 07 01:07:55.006 E ns/openshift-multus pod/multus-lvjj2 node/ip-10-0-138-213.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:08:00.039 E ns/openshift-machine-config-operator pod/machine-config-daemon-bn2gv node/ip-10-0-138-213.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error):