Result: SUCCESS
Tests: 4 failed / 21 succeeded
Started: 2020-07-08 16:44
Elapsed: 1h33m
Work namespace: ci-op-h0g8pbdd
Refs: release-4.3:02be5758, 221:921afda8
Pod: 3074b676-c13a-11ea-8ee2-0a580a830073
Repo: openshift/cloud-credential-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (39m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 14s of 35m5s (1%):

Jul 08 17:45:23.633 E ns/e2e-k8s-service-lb-available-1104 svc/service-test Service stopped responding to GET requests on reused connections
Jul 08 17:45:23.805 I ns/e2e-k8s-service-lb-available-1104 svc/service-test Service started responding to GET requests on reused connections
Jul 08 17:45:55.633 E ns/e2e-k8s-service-lb-available-1104 svc/service-test Service stopped responding to GET requests over new connections
Jul 08 17:45:56.633 - 11s   E ns/e2e-k8s-service-lb-available-1104 svc/service-test Service is not responding to GET requests over new connections
Jul 08 17:46:08.055 I ns/e2e-k8s-service-lb-available-1104 svc/service-test Service started responding to GET requests over new connections
Jul 08 17:46:32.633 E ns/e2e-k8s-service-lb-available-1104 svc/service-test Service stopped responding to GET requests on reused connections
Jul 08 17:46:32.829 I ns/e2e-k8s-service-lb-available-1104 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1594231732.xml



Cluster upgrade Cluster frontend ingress remain available (38m18s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 3m30s of 38m17s (9%):

Jul 08 17:43:03.571 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 08 17:43:03.923 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 08 17:43:17.570 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:43:17.908 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:44:53.570 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:44:54.570 - 3s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 08 17:44:58.913 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:45:23.570 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:45:24.570 - 4s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 08 17:45:28.919 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:45:43.571 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 17:45:43.571 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 08 17:45:43.908 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 08 17:45:43.909 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 17:46:32.570 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 17:46:33.570 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 08 17:46:33.570 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 08 17:46:34.072 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 17:46:34.570 - 4s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Jul 08 17:46:34.742 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 17:46:35.570 - 3s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 08 17:46:38.927 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 08 17:46:38.931 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 17:56:23.570 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 17:56:24.570 - 26s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 08 17:56:27.570 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:56:27.917 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:56:51.670 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:56:52.014 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:56:52.233 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 17:58:59.570 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 17:59:00.570 - 51s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 08 17:59:00.573 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Jul 08 17:59:01.570 - 50s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Jul 08 17:59:05.915 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 17:59:06.570 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 08 17:59:16.256 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 17:59:53.350 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Jul 08 17:59:53.350 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 18:02:08.888 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 18:02:09.570 - 17s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 08 18:02:14.570 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 08 18:02:14.919 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Jul 08 18:02:27.924 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 18:02:36.303 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Jul 08 18:02:36.570 - 10s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Jul 08 18:02:37.924 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Jul 08 18:02:38.570 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Jul 08 18:02:47.362 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Jul 08 18:02:47.366 I ns/openshift-console route/console Route started responding to GET requests over new connections
Jul 08 18:03:17.570 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Jul 08 18:03:17.925 I ns/openshift-console route/console Route started responding to GET requests on reused connections
				from junit_upgrade_1594231732.xml



Cluster upgrade Kubernetes and OpenShift APIs remain available (38m18s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1m31s of 38m18s (4%):

Jul 08 17:45:30.382 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 08 17:45:30.468 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 17:45:46.381 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 08 17:45:47.381 - 12s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 17:46:00.063 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 17:56:58.382 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 08 17:56:59.381 - 7s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 17:57:07.341 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 17:57:10.325 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 17:57:10.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 17:57:10.412 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:00:44.382 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 08 18:00:44.469 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:00.381 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 08 18:01:01.381 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:01:15.469 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:19.806 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:01:19.893 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:25.950 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:01:26.037 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:35.166 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:01:35.381 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:01:38.326 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:41.310 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:01:41.381 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:01:56.797 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:01:59.742 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:01:59.830 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:02:05.886 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:02:06.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:02:06.469 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:02:08.959 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:02:09.381 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:02:12.122 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:03:43.587 E kube-apiserver Kube API started failing: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 54.68.172.15:6443: connect: connection refused
Jul 08 18:03:44.381 E kube-apiserver Kube API is not responding to GET requests
Jul 08 18:03:44.468 I kube-apiserver Kube API started responding to GET requests
Jul 08 18:04:00.382 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 08 18:04:01.381 - 19s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:04:21.513 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:24.500 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:24.588 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:30.644 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:30.729 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:33.715 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:33.801 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:42.931 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:43.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:04:43.467 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:46.003 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:46.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:04:46.466 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:52.148 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:52.234 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:04:55.220 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:04:55.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:04:55.467 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:01.363 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:01.381 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:04.521 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:10.579 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:10.677 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:13.652 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:13.737 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:25.940 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:26.381 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:29.097 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:32.083 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:32.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:32.467 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:38.227 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:38.313 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:41.299 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:41.381 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:41.467 I openshift-apiserver OpenShift API started responding to GET requests
Jul 08 18:05:44.371 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Jul 08 18:05:44.381 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:50.602 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1594231732.xml



openshift-tests Monitor cluster while tests execute (39m24s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
191 error level events were detected during this test run:

Jul 08 17:32:55.355 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Jul 08 17:35:24.659 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5ddf6d4586-xgnsl node/ip-10-0-128-27.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): s/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14756 (15182)\nW0708 17:28:03.011367       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 5827 (16142)\nW0708 17:28:03.011391       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 5589 (16233)\nW0708 17:28:03.011962       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 16103 (17041)\nW0708 17:28:03.012137       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14038 (15182)\nW0708 17:28:03.023118       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: too old resource version: 12692 (15184)\nW0708 17:28:03.027133       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10987 (15182)\nW0708 17:32:33.890898       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18840 (19161)\nW0708 17:32:56.333035       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19192 (19281)\nW0708 17:33:11.296868       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19340 (19386)\nI0708 17:35:23.832486       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:35:23.832532       1 leaderelection.go:66] leaderelection lost\nI0708 17:35:23.838167       1 config_observer_controller.go:159] Shutting down ConfigObserver\n
Jul 08 17:35:36.690 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-f97f45495-tnbmh node/ip-10-0-128-27.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): on":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-08T17:23:37Z","message":"Progressing: 3 nodes are at revision 5","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-08T17:17:33Z","message":"Available: 3 nodes are active; 3 nodes are at revision 5","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-08T17:14:50Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0708 17:29:07.003183       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"d69458b8-fa3c-46f1-bd7f-2e53438f912c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-128-27.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-128-27.us-west-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nW0708 17:32:33.891291       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18840 (19161)\nW0708 17:32:56.332962       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19192 (19281)\nW0708 17:33:11.297342       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19340 (19386)\nI0708 17:35:35.660001       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:35:35.660111       1 leaderelection.go:66] leaderelection lost\nF0708 17:35:35.668286       1 builder.go:217] server exited\n
Jul 08 17:36:00.731 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 35:59.950742       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:35:59.950746       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:35:59.950750       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:35:59.950753       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:35:59.950757       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:35:59.950760       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:35:59.950764       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:35:59.950767       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:35:59.950771       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:35:59.950774       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:35:59.950796       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:35:59.950804       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:35:59.950809       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:35:59.950814       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:35:59.950837       1 server.go:692] external host was not specified, using 10.0.134.122\nI0708 17:35:59.950942       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:35:59.951084       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:36:21.883 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 36:21.398879       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:36:21.398883       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:36:21.398887       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:36:21.398891       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:36:21.398895       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:36:21.398898       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:36:21.398902       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:36:21.398905       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:36:21.398909       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:36:21.398912       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:36:21.398918       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:36:21.398923       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:36:21.398927       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:36:21.398931       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:36:21.398957       1 server.go:692] external host was not specified, using 10.0.134.122\nI0708 17:36:21.399066       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:36:21.399244       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:36:52.908 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 36:52.413808       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:36:52.413815       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:36:52.413822       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:36:52.413828       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:36:52.413834       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:36:52.413840       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:36:52.413846       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:36:52.413852       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:36:52.413858       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:36:52.413865       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:36:52.413875       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:36:52.413884       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:36:52.413891       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:36:52.413899       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:36:52.413930       1 server.go:692] external host was not specified, using 10.0.134.122\nI0708 17:36:52.414040       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:36:52.414216       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:37:11.930 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): 708 17:37:10.755729       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=18196&timeout=9m0s&timeoutSeconds=540&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:37:10.757787       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=20722&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:37:10.761460       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=18389&timeout=8m18s&timeoutSeconds=498&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:37:10.765923       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=19251&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:37:10.768500       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=19667&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0708 17:37:11.052964       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0708 17:37:11.052992       1 server.go:264] leaderelection lost\n
Jul 08 17:37:27.989 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): I0708 17:37:27.320410       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:37:27.322101       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:37:27.322379       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0708 17:37:27.322637       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nE0708 17:37:27.324170       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Jul 08 17:38:23.255 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :38:22.552753       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:38:22.552760       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:38:22.552766       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:38:22.552772       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:38:22.552779       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:38:22.552785       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:38:22.552792       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:38:22.552798       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:38:22.552804       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:38:22.552811       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:38:22.552823       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:38:22.552831       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:38:22.552839       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:38:22.552847       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:38:22.552879       1 server.go:692] external host was not specified, using 10.0.128.27\nI0708 17:38:22.552997       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:38:22.553221       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:38:28.318 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): I0708 17:38:27.549767       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:38:27.551257       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:38:27.551808       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0708 17:38:27.551844       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 08 17:38:39.328 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :38:38.251608       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:38:38.251615       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:38:38.251622       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:38:38.251639       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:38:38.251645       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:38:38.251650       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:38:38.251656       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:38:38.251662       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:38:38.251668       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:38:38.251674       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:38:38.251684       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:38:38.251693       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:38:38.251701       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:38:38.251710       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:38:38.251741       1 server.go:692] external host was not specified, using 10.0.128.27\nI0708 17:38:38.251852       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:38:38.252014       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:38:43.344 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): I0708 17:38:43.206082       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:38:43.207378       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:38:43.207408       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0708 17:38:43.207921       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 08 17:39:00.391 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :39:00.258636       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:39:00.258640       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:39:00.258644       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:39:00.258648       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:39:00.258651       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:39:00.258655       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:39:00.258659       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:39:00.258662       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:39:00.258666       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:39:00.258671       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:39:00.258676       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:39:00.258681       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:39:00.258686       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:39:00.258690       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:39:00.258712       1 server.go:692] external host was not specified, using 10.0.128.27\nI0708 17:39:00.258814       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:39:00.258980       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:39:41.573 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): ent-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=21833&timeout=6m2s&timeoutSeconds=362&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:40.170099       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=18350&timeout=8m23s&timeoutSeconds=503&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:40.171072       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=17400&timeout=9m15s&timeoutSeconds=555&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:40.172223       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=17407&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:40.173355       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=18196&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:40.174339       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=21936&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0708 17:39:40.781099       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0708 17:39:40.781125       1 server.go:264] leaderelection lost\n
Jul 08 17:39:48.984 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): I0708 17:39:48.511549       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:39:48.519253       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0708 17:39:48.518961       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0708 17:39:48.520543       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 08 17:40:02.876 E ns/openshift-cluster-machine-approver pod/machine-approver-688f8d78b8-n9wkg node/ip-10-0-128-27.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:36.150125       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:37.150647       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:38.151141       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:39.151706       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:40.152185       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0708 17:39:41.152662       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Jul 08 17:40:07.151 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 255 (Error): I0708 17:40:06.241613       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:40:06.243354       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:40:06.243889       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0708 17:40:06.244326       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 08 17:40:24.396 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-226.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/08 17:25:30 Watching directory: "/etc/alertmanager/config"\n
Jul 08 17:40:24.396 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-226.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/08 17:25:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:25:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:25:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:25:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/08 17:25:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:25:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:25:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:25:31 http.go:106: HTTPS: listening on [::]:9095\n
Jul 08 17:40:29.279 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 40:28.632776       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:40:28.632783       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:40:28.632789       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:40:28.632795       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:40:28.632801       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:40:28.632807       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:40:28.632814       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:40:28.632819       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:40:28.632826       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:40:28.632832       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:40:28.632844       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:40:28.632853       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:40:28.632900       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:40:28.632909       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:40:28.632966       1 server.go:692] external host was not specified, using 10.0.156.186\nI0708 17:40:28.633092       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:40:28.633662       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:40:39.165 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-226.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/08 17:27:37 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 08 17:40:39.165 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-226.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/08 17:27:38 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/08 17:27:38 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:27:38 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:27:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/08 17:27:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:27:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/08 17:27:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:27:38 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/08 17:27:38 http.go:106: HTTPS: listening on [::]:9091\n2020/07/08 17:31:00 oauthproxy.go:774: basicauth: 10.131.0.10:44326 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/08 17:35:30 oauthproxy.go:774: basicauth: 10.131.0.10:46650 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/08 17:40:01 oauthproxy.go:774: basicauth: 10.131.0.10:49238 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/08 17:40:19 oauthproxy.go:774: basicauth: 10.128.0.55:51226 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 08 17:40:39.165 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-226.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-08T17:27:37.62467436Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-08T17:27:37.624810258Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-08T17:27:37.626266736Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-08T17:27:42.737017655Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 08 17:40:40.199 E ns/openshift-monitoring pod/telemeter-client-5cf6cb99cc-gcwlf node/ip-10-0-128-226.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 08 17:40:40.199 E ns/openshift-monitoring pod/telemeter-client-5cf6cb99cc-gcwlf node/ip-10-0-128-226.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 08 17:40:44.250 E ns/openshift-monitoring pod/kube-state-metrics-9f86b764f-mnpxs node/ip-10-0-143-89.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 08 17:40:44.261 E ns/openshift-monitoring pod/prometheus-adapter-85fcf5995c-kv9cg node/ip-10-0-143-89.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0708 17:21:50.914642       1 adapter.go:93] successfully using in-cluster auth\nI0708 17:21:51.479326       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 08 17:40:46.264 E ns/openshift-monitoring pod/openshift-state-metrics-74d859bc49-zsgh8 node/ip-10-0-143-89.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 08 17:40:51.346 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 40:50.332769       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:40:50.332773       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:40:50.332776       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:40:50.332780       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:40:50.332783       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:40:50.332787       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:40:50.332790       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:40:50.332794       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:40:50.332798       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:40:50.332801       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:40:50.332809       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:40:50.332814       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:40:50.332819       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:40:50.332825       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:40:50.332854       1 server.go:692] external host was not specified, using 10.0.156.186\nI0708 17:40:50.332984       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:40:50.333196       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:40:58.301 E ns/openshift-ingress pod/router-default-79cd59688b-5zwqc node/ip-10-0-143-89.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:11.279694       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:16.273783       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:21.274483       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:26.311632       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:31.315042       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:36.416898       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:41.276960       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:46.275437       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:51.270893       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:40:56.268443       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Jul 08 17:41:01.240 E ns/openshift-monitoring pod/prometheus-adapter-85fcf5995c-pz9w9 node/ip-10-0-128-226.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0708 17:21:58.051810       1 adapter.go:93] successfully using in-cluster auth\nI0708 17:21:59.185080       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 08 17:41:02.259 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-226.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-08T17:40:45.950Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-08T17:40:45.960Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-08T17:40:45.961Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-08T17:40:45.963Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-08T17:40:45.963Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-08T17:40:45.963Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-08T17:40:45.964Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-08T17:40:45.964Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-08
Jul 08 17:41:05.277 E ns/openshift-marketplace pod/redhat-operators-78bb8c6c84-dnxtx node/ip-10-0-128-226.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 08 17:41:07.324 E ns/openshift-monitoring pod/thanos-querier-5d8594d887-wsk85 node/ip-10-0-128-226.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/08 17:27:17 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:27:17 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:27:17 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:27:17 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/08 17:27:17 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:27:17 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:27:17 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:27:17 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/08 17:27:17 http.go:106: HTTPS: listening on [::]:9091\n
Jul 08 17:41:17.351 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-143-89.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/08 17:26:00 Watching directory: "/etc/alertmanager/config"\n
Jul 08 17:41:17.351 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-143-89.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/08 17:26:00 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:26:00 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:26:00 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:26:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/08 17:26:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:26:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:26:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:26:00 http.go:106: HTTPS: listening on [::]:9095\n
Jul 08 17:41:18.419 E ns/openshift-console-operator pod/console-operator-8549cd66b8-c92zl node/ip-10-0-128-27.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): oding: unexpected EOF\nI0708 17:37:00.729154       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0708 17:37:00.729320       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0708 17:37:00.729328       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0708 17:37:00.729349       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0708 17:37:00.729359       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0708 17:37:00.729368       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0708 17:37:00.871825       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19649 (20487)\nW0708 17:37:00.871898       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20322 (20487)\nW0708 17:37:00.871946       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19768 (20487)\nW0708 17:37:00.955837       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 17319 (20488)\nW0708 17:39:35.179692       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 22004 (22043)\nW0708 17:39:37.818257       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 22043 (22063)\nI0708 17:41:17.430173       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:41:17.430684       1 leaderelection.go:66] leaderelection lost\nF0708 17:41:17.430725       1 builder.go:217] server exited\n
Jul 08 17:41:22.476 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 41:22.272882       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0708 17:41:22.272889       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0708 17:41:22.272895       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0708 17:41:22.272902       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0708 17:41:22.272908       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0708 17:41:22.272914       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0708 17:41:22.272920       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0708 17:41:22.272926       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0708 17:41:22.272932       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0708 17:41:22.272938       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0708 17:41:22.272949       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0708 17:41:22.272960       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0708 17:41:22.272968       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0708 17:41:22.272975       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0708 17:41:22.273008       1 server.go:692] external host was not specified, using 10.0.156.186\nI0708 17:41:22.273121       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0708 17:41:22.273325       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jul 08 17:41:24.001 E ns/openshift-controller-manager pod/controller-manager-gkdqw node/ip-10-0-134-122.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 08 17:41:32.509 E ns/openshift-service-ca pod/configmap-cabundle-injector-5cbd977dff-z6xqw node/ip-10-0-156-186.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Jul 08 17:41:32.540 E ns/openshift-service-ca pod/service-serving-cert-signer-7586cdd585-pmtj5 node/ip-10-0-156-186.us-west-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Jul 08 17:41:33.413 E ns/openshift-marketplace pod/certified-operators-56cf7c7d75-tzkft node/ip-10-0-128-226.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 08 17:41:34.584 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-157-141.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-08T17:41:22.463Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-08T17:41:22.465Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-08T17:41:22.466Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-08T17:41:22.467Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-08T17:41:22.467Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-08T17:41:22.467Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-08T17:41:22.468Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-08T17:41:22.468Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-08
Jul 08 17:41:47.550 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): ::1]:6443: connect: connection refused\nE0708 17:41:45.840417       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=25182&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:41:45.841649       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=18389&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:41:45.842637       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=25452&timeoutSeconds=554&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:41:45.843814       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=20956&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 17:41:45.844881       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=15191&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0708 17:41:46.797444       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0708 17:41:46.797472       1 server.go:264] leaderelection lost\n
Jul 08 17:41:59.522 E ns/openshift-monitoring pod/node-exporter-r45wt node/ip-10-0-128-27.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:20:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 17:42:06.608 E ns/openshift-monitoring pod/node-exporter-2vhk6 node/ip-10-0-156-186.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:20:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:20:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 17:42:08.558 E ns/openshift-controller-manager pod/controller-manager-thrpl node/ip-10-0-128-27.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 08 17:42:53.756 E ns/openshift-console pod/console-8fc7dfcdb-x7d72 node/ip-10-0-156-186.us-west-2.compute.internal container=console container exited with code 2 (Error): : 404 Not Found\n2020/07/8 17:22:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:22:44 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:22:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:14 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:44 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:54 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:24:04 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:24:14 cmd/main: Binding to [::]:8443...\n2020/07/8 17:24:14 cmd/main: using TLS\n
Jul 08 17:42:57.770 E ns/openshift-controller-manager pod/controller-manager-v8j2n node/ip-10-0-156-186.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Jul 08 17:43:08.353 E ns/openshift-console pod/console-8fc7dfcdb-c9n26 node/ip-10-0-134-122.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/07/8 17:22:39 cmd/main: cookies are secure!\n2020/07/8 17:22:39 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:22:50 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:05 auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/8 17:23:15 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:25 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:35 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:45 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:23:55 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:24:05 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/07/8 17:24:15 cmd/main: Binding to [::]:8443...\n2020/07/8 17:24:15 cmd/main: using TLS\n
Jul 08 17:44:54.077 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-dc8577b8c-77wr9 node/ip-10-0-156-186.us-west-2.compute.internal container=manager container exited with code 1 (Error): msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-08T17:41:36Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-08T17:41:36Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-07-08T17:41:36Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-07-08T17:41:36Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-07-08T17:41:36Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-07-08T17:41:36Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-07-08T17:41:36Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-07-08T17:41:36Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-07-08T17:42:32Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-07-08T17:42:32Z" level=info msg="reconcile complete" controller=metrics elapsed=1.315981ms\ntime="2020-07-08T17:44:32Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-07-08T17:44:32Z" level=info msg="reconcile complete" controller=metrics elapsed=1.369111ms\ntime="2020-07-08T17:44:53Z" level=error msg="leader election lostunable to run the manager"\n
Jul 08 17:44:58.610 E ns/openshift-sdn pod/sdn-controller-wm55m node/ip-10-0-134-122.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): rk/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 9339 (16142)\nW0708 17:25:43.366245       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 10467 (16142)\nI0708 17:29:32.996901       1 vnids.go:115] Allocated netid 12113252 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-9773"\nI0708 17:29:33.005442       1 vnids.go:115] Allocated netid 1579705 for namespace "e2e-frontend-ingress-available-4835"\nI0708 17:29:33.013632       1 vnids.go:115] Allocated netid 11227767 for namespace "e2e-k8s-sig-apps-job-upgrade-6631"\nI0708 17:29:33.020076       1 vnids.go:115] Allocated netid 12632075 for namespace "e2e-control-plane-available-2085"\nI0708 17:29:33.055384       1 vnids.go:115] Allocated netid 8589638 for namespace "e2e-k8s-service-lb-available-1104"\nI0708 17:29:33.066830       1 vnids.go:115] Allocated netid 11349963 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-133"\nI0708 17:29:33.080868       1 vnids.go:115] Allocated netid 7435532 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-7197"\nI0708 17:29:33.096726       1 vnids.go:115] Allocated netid 16090059 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-2485"\nI0708 17:29:33.111809       1 vnids.go:115] Allocated netid 10115656 for namespace "e2e-k8s-sig-apps-deployment-upgrade-7376"\nW0708 17:41:36.157986       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 18143 (22108)\nW0708 17:41:36.158161       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 17864 (25458)\nW0708 17:41:36.493055       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 16142 (25459)\n
Jul 08 17:45:11.900 E ns/openshift-multus pod/multus-g7xd4 node/ip-10-0-143-89.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 08 17:45:15.078 E ns/openshift-sdn pod/ovs-k4h2v node/ip-10-0-128-27.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): 45:13.631Z|00415|bridge|INFO|bridge br0: deleted interface veth8845c121 on port 54\n2020-07-08T17:45:13.632Z|00416|bridge|INFO|bridge br0: deleted interface veth130d1056 on port 47\n2020-07-08T17:45:13.632Z|00417|bridge|INFO|bridge br0: deleted interface veth61b56ba6 on port 59\n2020-07-08T17:45:13.632Z|00418|bridge|INFO|bridge br0: deleted interface veth51e555e9 on port 12\n2020-07-08T17:45:13.632Z|00419|bridge|INFO|bridge br0: deleted interface veth9c88801f on port 62\n2020-07-08T17:45:13.632Z|00420|bridge|INFO|bridge br0: deleted interface veth316cdb57 on port 57\n2020-07-08T17:45:13.632Z|00421|bridge|INFO|bridge br0: deleted interface vethc7bb1bc2 on port 67\n2020-07-08T17:45:13.632Z|00422|bridge|INFO|bridge br0: deleted interface veth97a2549e on port 56\n2020-07-08T17:45:13.632Z|00423|bridge|INFO|bridge br0: deleted interface tun0 on port 2\n2020-07-08T17:45:13.632Z|00424|bridge|INFO|bridge br0: deleted interface veth26c0ec66 on port 11\n2020-07-08T17:45:13.632Z|00425|bridge|INFO|bridge br0: deleted interface veth5aeee670 on port 15\n2020-07-08T17:45:13.632Z|00426|bridge|INFO|bridge br0: deleted interface veth885b0ab0 on port 66\n2020-07-08T17:45:13.632Z|00427|bridge|INFO|bridge br0: deleted interface veth0c1c3f66 on port 55\n2020-07-08T17:45:13.632Z|00428|bridge|INFO|bridge br0: deleted interface veth3c2b82a7 on port 60\n2020-07-08T17:45:13.632Z|00429|bridge|INFO|bridge br0: deleted interface veth6272599e on port 50\n2020-07-08T17:45:13.632Z|00430|bridge|INFO|bridge br0: deleted interface vethba7958cf on port 53\n2020-07-08T17:45:13.632Z|00431|bridge|INFO|bridge br0: deleted interface br0 on port 65534\n2020-07-08T17:45:13.632Z|00432|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1\n2020-07-08T17:45:13.632Z|00433|bridge|INFO|bridge br0: deleted interface vethca78bcb8 on port 65\n2020-07-08T17:45:13.632Z|00434|bridge|INFO|bridge br0: deleted interface vethe305b5b2 on port 45\n2020-07-08 17:45:14 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 17:45:18.094 E ns/openshift-sdn pod/sdn-xv6lf node/ip-10-0-128-27.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ng 0 service events\nI0708 17:44:40.907876    2260 proxier.go:350] userspace syncProxyRules took 25.529767ms\nI0708 17:44:54.046519    2260 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/controller-manager-service:\nI0708 17:44:54.046681    2260 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics\nI0708 17:44:54.175084    2260 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:44:54.175100    2260 proxier.go:350] userspace syncProxyRules took 25.740359ms\nI0708 17:44:54.297568    2260 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:44:54.297584    2260 proxier.go:350] userspace syncProxyRules took 25.266995ms\nI0708 17:44:55.051421    2260 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.130.0.51:2112]\nI0708 17:44:55.051457    2260 roundrobin.go:218] Delete endpoint 10.130.0.51:2112 for service "openshift-cloud-credential-operator/cco-metrics:cco-metrics"\nI0708 17:44:55.052426    2260 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.130.0.51:443]\nI0708 17:44:55.052499    2260 roundrobin.go:218] Delete endpoint 10.130.0.51:443 for service "openshift-cloud-credential-operator/controller-manager-service:"\nI0708 17:44:55.169161    2260 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:44:55.169179    2260 proxier.go:350] userspace syncProxyRules took 24.835088ms\nI0708 17:44:55.288358    2260 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:44:55.288375    2260 proxier.go:350] userspace syncProxyRules took 25.601412ms\nI0708 17:45:17.000336    2260 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0708 17:45:17.000364    2260 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 08 17:45:30.070 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 17:45:37.778 E ns/openshift-sdn pod/sdn-xkgkd node/ip-10-0-134-122.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 7:45:28.444574    2342 proxier.go:350] userspace syncProxyRules took 25.187686ms\nI0708 17:45:36.540563    2342 service.go:382] Removing service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:45:36.553515    2342 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:\nI0708 17:45:36.586191    2342 roundrobin.go:236] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.61:5443 10.130.0.60:5443]\nI0708 17:45:36.586223    2342 roundrobin.go:218] Delete endpoint 10.129.0.61:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:45:36.586233    2342 roundrobin.go:218] Delete endpoint 10.130.0.60:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:45:36.695981    2342 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:\nI0708 17:45:36.706812    2342 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:45:36.706836    2342 proxier.go:350] userspace syncProxyRules took 43.38912ms\nI0708 17:45:36.723056    2342 roundrobin.go:236] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.61:5443 10.130.0.60:5443]\nI0708 17:45:36.723088    2342 roundrobin.go:218] Delete endpoint 10.129.0.61:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:45:36.723100    2342 roundrobin.go:218] Delete endpoint 10.130.0.60:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:45:36.733896    2342 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0708 17:45:36.733924    2342 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 08 17:45:54.012 E ns/openshift-sdn pod/ovs-zmxbj node/ip-10-0-128-226.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): last 0 s (5 adds)\n2020-07-08T17:40:47.246Z|00172|connmgr|INFO|br0<->unix#1177: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:40:53.633Z|00173|bridge|INFO|bridge br0: added interface vethd3e9fbf7 on port 30\n2020-07-08T17:40:53.666Z|00174|connmgr|INFO|br0<->unix#1184: 5 flow_mods in the last 0 s (5 adds)\n2020-07-08T17:40:53.704Z|00175|connmgr|INFO|br0<->unix#1187: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:00.891Z|00176|connmgr|INFO|br0<->unix#1196: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:00.936Z|00177|connmgr|INFO|br0<->unix#1200: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:00.971Z|00178|bridge|INFO|bridge br0: deleted interface veth78ef3f22 on port 5\n2020-07-08T17:41:04.648Z|00179|connmgr|INFO|br0<->unix#1205: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:04.698Z|00180|connmgr|INFO|br0<->unix#1208: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:04.746Z|00181|bridge|INFO|bridge br0: deleted interface veth2d31a12a on port 7\n2020-07-08T17:41:06.743Z|00182|connmgr|INFO|br0<->unix#1213: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:06.801Z|00183|connmgr|INFO|br0<->unix#1216: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:06.869Z|00184|bridge|INFO|bridge br0: deleted interface vethc357b470 on port 15\n2020-07-08T17:41:32.290Z|00185|connmgr|INFO|br0<->unix#1237: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:32.327Z|00186|connmgr|INFO|br0<->unix#1240: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:32.371Z|00187|bridge|INFO|bridge br0: deleted interface veth6752dc6e on port 6\n2020-07-08T17:41:32.887Z|00188|connmgr|INFO|br0<->unix#1244: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:32.919Z|00189|connmgr|INFO|br0<->unix#1247: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:32.960Z|00190|bridge|INFO|bridge br0: deleted interface vethf00f59a8 on port 8\n2020-07-08 17:45:52 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 17:45:58.846 E ns/openshift-multus pod/multus-admission-controller-8tn74 node/ip-10-0-134-122.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 08 17:46:08.023 E ns/openshift-multus pod/multus-mn282 node/ip-10-0-128-27.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 08 17:46:31.236 E ns/openshift-sdn pod/sdn-hrrmk node/ip-10-0-157-141.us-west-2.compute.internal container=sdn container exited with code 255 (Error): -operators-coreos-com:"\nI0708 17:46:21.376473   68717 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:21.376496   68717 proxier.go:350] userspace syncProxyRules took 30.588151ms\nI0708 17:46:21.509476   68717 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:21.509500   68717 proxier.go:350] userspace syncProxyRules took 29.50256ms\nI0708 17:46:28.003085   68717 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.67:5443 10.130.0.60:5443 10.130.0.67:5443]\nI0708 17:46:28.003116   68717 roundrobin.go:218] Delete endpoint 10.130.0.67:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:46:28.043197   68717 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.67:5443 10.130.0.67:5443]\nI0708 17:46:28.043256   68717 roundrobin.go:218] Delete endpoint 10.130.0.60:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:46:28.131605   68717 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:28.131634   68717 proxier.go:350] userspace syncProxyRules took 28.77071ms\nI0708 17:46:28.260645   68717 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:28.260676   68717 proxier.go:350] userspace syncProxyRules took 29.816006ms\nI0708 17:46:30.840655   68717 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-1104/service-test: to [10.131.0.18:80]\nI0708 17:46:30.840686   68717 roundrobin.go:218] Delete endpoint 10.129.2.14:80 for service "e2e-k8s-service-lb-available-1104/service-test:"\nI0708 17:46:30.877490   68717 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0708 17:46:30.877525   68717 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 08 17:46:47.071 E ns/openshift-multus pod/multus-admission-controller-ck4lm node/ip-10-0-128-27.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 08 17:46:50.028 E ns/openshift-sdn pod/ovs-mpvxk node/ip-10-0-143-89.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:08.145Z|00166|bridge|INFO|bridge br0: deleted interface veth293cecb6 on port 14\n2020-07-08T17:41:16.935Z|00167|connmgr|INFO|br0<->unix#1192: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:41:16.962Z|00168|connmgr|INFO|br0<->unix#1195: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:41:16.984Z|00169|bridge|INFO|bridge br0: deleted interface veth4a4f7ac7 on port 15\n2020-07-08T17:41:27.057Z|00170|bridge|INFO|bridge br0: added interface vethdd22ae00 on port 28\n2020-07-08T17:41:27.089Z|00171|connmgr|INFO|br0<->unix#1204: 5 flow_mods in the last 0 s (5 adds)\n2020-07-08T17:41:27.142Z|00172|connmgr|INFO|br0<->unix#1207: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:45:39.325Z|00173|connmgr|INFO|br0<->unix#1395: 2 flow_mods in the last 0 s (2 adds)\n2020-07-08T17:45:39.371Z|00174|connmgr|INFO|br0<->unix#1399: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08T17:45:39.606Z|00175|connmgr|INFO|br0<->unix#1407: 3 flow_mods in the last 0 s (3 adds)\n2020-07-08T17:45:39.629Z|00176|connmgr|INFO|br0<->unix#1410: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08T17:45:39.651Z|00177|connmgr|INFO|br0<->unix#1413: 3 flow_mods in the last 0 s (3 adds)\n2020-07-08T17:45:39.675Z|00178|connmgr|INFO|br0<->unix#1416: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08T17:45:39.701Z|00179|connmgr|INFO|br0<->unix#1419: 3 flow_mods in the last 0 s (3 adds)\n2020-07-08T17:45:39.731Z|00180|connmgr|INFO|br0<->unix#1422: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08T17:45:39.764Z|00181|connmgr|INFO|br0<->unix#1425: 3 flow_mods in the last 0 s (3 adds)\n2020-07-08T17:45:39.792Z|00182|connmgr|INFO|br0<->unix#1428: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08T17:45:39.814Z|00183|connmgr|INFO|br0<->unix#1431: 3 flow_mods in the last 0 s (3 adds)\n2020-07-08T17:45:39.838Z|00184|connmgr|INFO|br0<->unix#1434: 1 flow_mods in the last 0 s (1 adds)\n2020-07-08 17:46:49 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 17:46:52.050 E ns/openshift-sdn pod/sdn-km84q node/ip-10-0-143-89.us-west-2.compute.internal container=sdn container exited with code 255 (Error): :5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:46:28.051723   53257 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.129.0.67:5443 10.130.0.67:5443]\nI0708 17:46:28.051760   53257 roundrobin.go:218] Delete endpoint 10.130.0.60:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0708 17:46:28.148874   53257 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:28.148901   53257 proxier.go:350] userspace syncProxyRules took 28.340013ms\nI0708 17:46:28.284704   53257 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:28.284729   53257 proxier.go:350] userspace syncProxyRules took 27.965151ms\nI0708 17:46:30.848221   53257 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-1104/service-test: to [10.131.0.18:80]\nI0708 17:46:30.848263   53257 roundrobin.go:218] Delete endpoint 10.129.2.14:80 for service "e2e-k8s-service-lb-available-1104/service-test:"\nI0708 17:46:30.978221   53257 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:30.978246   53257 proxier.go:350] userspace syncProxyRules took 27.707099ms\nI0708 17:46:32.852040   53257 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-1104/service-test: to [10.129.2.14:80 10.131.0.18:80]\nI0708 17:46:32.852083   53257 roundrobin.go:218] Delete endpoint 10.129.2.14:80 for service "e2e-k8s-service-lb-available-1104/service-test:"\nI0708 17:46:32.978257   53257 proxier.go:371] userspace proxy: processing 0 service events\nI0708 17:46:32.978282   53257 proxier.go:350] userspace syncProxyRules took 26.925139ms\nI0708 17:46:51.927818   53257 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0708 17:46:51.927858   53257 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 08 17:47:53.239 E ns/openshift-multus pod/multus-nlzrj node/ip-10-0-134-122.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 08 17:48:49.778 E ns/openshift-multus pod/multus-97t6h node/ip-10-0-156-186.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 08 17:49:43.615 E ns/openshift-multus pod/multus-xmbvt node/ip-10-0-128-226.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 08 17:50:34.600 E ns/openshift-machine-config-operator pod/machine-config-operator-674868f7fc-9qbs4 node/ip-10-0-128-27.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): 68 (16140)\nW0708 17:41:36.019039       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 16141 (21952)\nW0708 17:41:36.116738       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 18024 (21027)\nW0708 17:41:36.116909       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 19383 (21024)\nW0708 17:41:37.020053       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 16233 (21056)\nW0708 17:41:37.020664       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 16150 (22065)\nW0708 17:41:37.025523       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 20379 (23301)\nW0708 17:41:37.025607       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 16140 (22511)\nW0708 17:41:37.025739       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 16142 (22515)\nW0708 17:41:37.026335       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 16150 (22064)\nW0708 17:41:37.042227       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 18282 (21032)\n
Jul 08 17:52:29.912 E ns/openshift-machine-config-operator pod/machine-config-daemon-xhvbv node/ip-10-0-128-226.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:52:40.912 E ns/openshift-machine-config-operator pod/machine-config-daemon-qntr7 node/ip-10-0-128-27.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:52:58.779 E ns/openshift-machine-config-operator pod/machine-config-daemon-hrgs6 node/ip-10-0-143-89.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:53:38.214 E ns/openshift-machine-config-operator pod/machine-config-daemon-xbb4r node/ip-10-0-157-141.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:56:13.382 E ns/openshift-machine-config-operator pod/machine-config-server-wlllt node/ip-10-0-128-27.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0708 17:16:13.885738       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-2-g738d844d-dirty (738d844de27ff701ed022862cafda4431a6c074f)\nI0708 17:16:13.886345       1 api.go:56] Launching server on :22624\nI0708 17:16:13.886376       1 api.go:56] Launching server on :22623\nI0708 17:17:00.327880       1 api.go:102] Pool worker requested by 10.0.132.35:50914\n
Jul 08 17:56:15.588 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7697d7b57b-tdgp9 node/ip-10-0-128-27.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 17678 (25468)\nW0708 17:41:37.207917       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 18196 (22115)\nW0708 17:41:37.208045       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 21964 (24722)\nW0708 17:41:37.208182       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17689 (24424)\nW0708 17:41:37.210794       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 20669 (22108)\nW0708 17:41:37.210823       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 18143 (22108)\nW0708 17:45:42.745063       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 27392 (27406)\nW0708 17:46:10.484853       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 27704 (27713)\nW0708 17:52:13.491006       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 29831 (29963)\nW0708 17:52:16.233838       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 29963 (29975)\nI0708 17:56:14.215032       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:56:14.215084       1 leaderelection.go:66] leaderelection lost\n
Jul 08 17:56:16.602 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7ddccc9c4f-prqcg node/ip-10-0-128-27.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0708 17:40:31.652767       1 tuned_controller.go:320] syncDaemonSet()\nI0708 17:40:32.261990       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0708 17:40:32.262013       1 status.go:25] syncOperatorStatus()\nI0708 17:40:32.272558       1 tuned_controller.go:188] syncServiceAccount()\nI0708 17:40:32.272683       1 tuned_controller.go:215] syncClusterRole()\nI0708 17:40:32.311306       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0708 17:40:32.352938       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:40:32.359245       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:40:32.362543       1 tuned_controller.go:320] syncDaemonSet()\nI0708 17:40:42.193181       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0708 17:40:42.193476       1 status.go:25] syncOperatorStatus()\nI0708 17:40:42.216877       1 tuned_controller.go:188] syncServiceAccount()\nI0708 17:40:42.217079       1 tuned_controller.go:215] syncClusterRole()\nI0708 17:40:42.265178       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0708 17:40:42.332256       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:40:42.360118       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:40:42.366281       1 tuned_controller.go:320] syncDaemonSet()\nI0708 17:50:26.444969       1 tuned_controller.go:422] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0708 17:50:26.444995       1 status.go:25] syncOperatorStatus()\nI0708 17:50:26.454091       1 tuned_controller.go:188] syncServiceAccount()\nI0708 17:50:26.454255       1 tuned_controller.go:215] syncClusterRole()\nI0708 17:50:26.494030       1 tuned_controller.go:248] syncClusterRoleBinding()\nI0708 17:50:26.526848       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:50:26.530330       1 tuned_controller.go:281] syncClusterConfigMap()\nI0708 17:50:26.534403       1 tuned_controller.go:320] syncDaemonSet()\nF0708 17:56:15.654480       1 main.go:82] <nil>\n
Jul 08 17:56:17.515 E ns/openshift-machine-config-operator pod/machine-config-server-p2f7h node/ip-10-0-134-122.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0708 17:16:13.921532       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-2-g738d844d-dirty (738d844de27ff701ed022862cafda4431a6c074f)\nI0708 17:16:13.922462       1 api.go:56] Launching server on :22624\nI0708 17:16:13.922570       1 api.go:56] Launching server on :22623\nI0708 17:17:02.778402       1 api.go:102] Pool worker requested by 10.0.146.245:40377\n
Jul 08 17:56:17.649 E ns/openshift-console pod/console-5495ddb945-nzvs9 node/ip-10-0-128-27.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/07/8 17:42:42 cmd/main: cookies are secure!\n2020/07/8 17:42:42 cmd/main: Binding to [::]:8443...\n2020/07/8 17:42:42 cmd/main: using TLS\n2020/07/8 17:44:48 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020/07/8 17:44:53 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/8 17:44:58 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/07/8 17:44:59 auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jul 08 17:56:34.788 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-226.us-west-2.compute.internal container=alertmanager-proxy container exited with code 1 (Error): 2020/07/08 17:56:33 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:56:33 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:56:33 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:56:33 main.go:138: Invalid configuration:\n  unable to load OpenShift configuration: unable to retrieve authentication information for tokens: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 08 17:57:14.137 E ns/openshift-cluster-node-tuning-operator pod/tuned-s98qq node/ip-10-0-156-186.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ft-tuned.go:441] Getting recommended profile...\nI0708 17:50:35.993269   67623 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:53:32.025141   67623 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-hg94j) labels changed node wide: true\nI0708 17:53:35.891256   67623 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:53:35.892567   67623 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:53:35.992845   67623 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:54:12.022713   67623 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-controller-55d5fdbfd4-ctrgw) labels changed node wide: true\nI0708 17:54:15.891236   67623 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:54:15.892574   67623 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:54:15.990341   67623 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:56:12.022178   67623 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-server-bnc7t) labels changed node wide: true\nI0708 17:56:15.891250   67623 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:56:15.892941   67623 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:56:16.020055   67623 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:56:31.218134   67623 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0708 17:56:31.222784   67623 openshift-tuned.go:881] Pod event watch channel closed.\nI0708 17:56:31.222850   67623 openshift-tuned.go:883] Increasing resyncPeriod to 104\n
Jul 08 17:57:48.267 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Jul 08 17:58:04.096 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Jul 08 17:58:34.164 E ns/openshift-cluster-node-tuning-operator pod/tuned-fh2j5 node/ip-10-0-143-89.us-west-2.compute.internal container=tuned container exited with code 143 (Error): pod-labels.cfg\nI0708 17:52:27.090662   42215 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:52:27.219026   42215 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:53:06.490121   42215 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-hrgs6) labels changed node wide: true\nI0708 17:53:07.083835   42215 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:53:07.085544   42215 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:53:07.209477   42215 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:56:13.390203   42215 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-7376/dp-657fc4b57d-ldl2j) labels changed node wide: true\nI0708 17:56:17.083872   42215 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:56:17.090718   42215 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:56:17.219696   42215 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:56:26.489792   42215 openshift-tuned.go:550] Pod (openshift-monitoring/openshift-state-metrics-59fc9ffdc8-7qzst) labels changed node wide: true\nI0708 17:56:27.083879   42215 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:56:27.085852   42215 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:56:27.199607   42215 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:56:31.221909   42215 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0708 17:56:31.225881   42215 openshift-tuned.go:881] Pod event watch channel closed.\nI0708 17:56:31.225901   42215 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Jul 08 17:58:34.181 E ns/openshift-monitoring pod/node-exporter-nvvl8 node/ip-10-0-143-89.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:40:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:40:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 17:58:34.237 E ns/openshift-multus pod/multus-mmw5b node/ip-10-0-143-89.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 17:58:34.245 E ns/openshift-sdn pod/ovs-wjblx node/ip-10-0-143-89.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): 56:14.330Z|00154|bridge|INFO|bridge br0: deleted interface veth141882bc on port 3\n2020-07-08T17:56:14.411Z|00155|connmgr|INFO|br0<->unix#545: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:14.481Z|00156|connmgr|INFO|br0<->unix#548: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:14.521Z|00157|bridge|INFO|bridge br0: deleted interface veth118aa4bd on port 6\n2020-07-08T17:56:14.564Z|00158|connmgr|INFO|br0<->unix#551: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:14.617Z|00159|connmgr|INFO|br0<->unix#554: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:14.649Z|00160|bridge|INFO|bridge br0: deleted interface veth28ec4117 on port 13\n2020-07-08T17:56:14.709Z|00161|connmgr|INFO|br0<->unix#557: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:14.766Z|00162|connmgr|INFO|br0<->unix#560: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:14.806Z|00163|bridge|INFO|bridge br0: deleted interface vethdd22ae00 on port 15\n2020-07-08T17:56:43.716Z|00164|connmgr|INFO|br0<->unix#584: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:43.745Z|00165|connmgr|INFO|br0<->unix#587: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:43.766Z|00166|bridge|INFO|bridge br0: deleted interface veth11028ecb on port 10\n2020-07-08T17:56:43.812Z|00167|connmgr|INFO|br0<->unix#590: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:43.842Z|00168|connmgr|INFO|br0<->unix#593: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:43.863Z|00169|bridge|INFO|bridge br0: deleted interface veth67a1abe6 on port 11\n2020-07-08T17:56:43.852Z|00020|jsonrpc|WARN|Dropped 5 log messages in last 592 seconds (most recently, 591 seconds ago) due to excessive rate\n2020-07-08T17:56:43.852Z|00021|jsonrpc|WARN|unix#529: receive error: Connection reset by peer\n2020-07-08T17:56:43.852Z|00022|reconnect|WARN|unix#529: connection dropped (Connection reset by peer)\n2020-07-08 17:56:45 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 17:58:34.265 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzmnz node/ip-10-0-143-89.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:58:39.718 E ns/openshift-multus pod/multus-mmw5b node/ip-10-0-143-89.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 17:58:42.540 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzmnz node/ip-10-0-143-89.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 17:58:50.983 E ns/openshift-monitoring pod/telemeter-client-84987d8c84-sqcwp node/ip-10-0-157-141.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 08 17:58:50.983 E ns/openshift-monitoring pod/telemeter-client-84987d8c84-sqcwp node/ip-10-0-157-141.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 08 17:58:51.030 E ns/openshift-ingress pod/router-default-8459c5449c-9jqjj node/ip-10-0-157-141.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:56:43.388844       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:56:48.390002       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:56:53.390085       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:57:11.256074       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:57:16.252446       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:57:21.526811       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:57:26.512817       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:58:38.796884       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:58:43.776270       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0708 17:58:48.776522       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Jul 08 17:58:52.119 E ns/openshift-monitoring pod/prometheus-adapter-6764cf58b6-bw9d4 node/ip-10-0-157-141.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0708 17:41:00.093899       1 adapter.go:93] successfully using in-cluster auth\nI0708 17:41:01.597311       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 08 17:59:09.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 1 (Error): I0708 17:39:15.199713       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:39:15.200993       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:39:15.201037       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0708 17:39:46.362525       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\n
Jul 08 17:59:09.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:55:20.596999       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:55:20.597309       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:55:30.604903       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:55:30.605157       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:55:40.611807       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:55:40.612092       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:55:50.620763       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:55:50.621025       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:56:00.628004       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:56:00.628274       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:56:10.635542       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:56:10.635830       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:56:20.643894       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:56:20.644141       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:56:30.656078       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:56:30.656364       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Jul 08 17:59:09.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-controller-manager-8 container exited with code 2 (Error): 30155 +0000 UTC))\nI0708 17:38:23.920225       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1594229903" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1594229903" (2020-07-08 16:38:23 +0000 UTC to 2021-07-08 16:38:23 +0000 UTC (now=2020-07-08 17:38:23.920216197 +0000 UTC))\nI0708 17:38:23.920249       1 secure_serving.go:178] Serving securely on [::]:10257\nI0708 17:38:23.920298       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0708 17:38:23.921105       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0708 17:38:23.921147       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0708 17:38:23.921113       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0708 17:38:23.921205       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nE0708 17:39:30.387830       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:34.752004       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0708 17:39:46.293827       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Jul 08 17:59:09.934 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): ntication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1594228491" [] issuer="kubelet-signer" (2020-07-08 17:14:50 +0000 UTC to 2020-07-09 16:57:05 +0000 UTC (now=2020-07-08 17:39:46.686345735 +0000 UTC))\nI0708 17:39:46.686382       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-07-08 16:57:02 +0000 UTC to 2020-07-09 16:57:02 +0000 UTC (now=2020-07-08 17:39:46.686370806 +0000 UTC))\nI0708 17:39:46.686586       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1594228493" (2020-07-08 17:15:17 +0000 UTC to 2022-07-08 17:15:18 +0000 UTC (now=2020-07-08 17:39:46.686578313 +0000 UTC))\nI0708 17:39:46.686793       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1594229986" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1594229986" (2020-07-08 16:39:45 +0000 UTC to 2021-07-08 16:39:45 +0000 UTC (now=2020-07-08 17:39:46.686784274 +0000 UTC))\nI0708 17:39:46.686863       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1594229986" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1594229986" (2020-07-08 16:39:45 +0000 UTC to 2021-07-08 16:39:45 +0000 UTC (now=2020-07-08 17:39:46.686855669 +0000 UTC))\nI0708 17:39:46.767907       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Jul 08 17:59:10.006 E ns/openshift-cluster-node-tuning-operator pod/tuned-cfh5h node/ip-10-0-128-27.us-west-2.compute.internal container=tuned container exited with code 143 (Error): er-4-ip-10-0-128-27.us-west-2.compute.internal) labels changed node wide: false\nI0708 17:56:14.143284   68885 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ip-10-0-128-27.us-west-2.compute.internal) labels changed node wide: false\nI0708 17:56:14.151994   68885 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-128-27.us-west-2.compute.internal) labels changed node wide: false\nI0708 17:56:14.160033   68885 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-5-ip-10-0-128-27.us-west-2.compute.internal) labels changed node wide: false\nI0708 17:56:14.450181   68885 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-server-wlllt) labels changed node wide: true\nI0708 17:56:16.711019   68885 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:56:16.712908   68885 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:56:16.849510   68885 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:56:16.849987   68885 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-4-ip-10-0-128-27.us-west-2.compute.internal) labels changed node wide: false\nI0708 17:56:16.850796   68885 openshift-tuned.go:550] Pod (openshift-kube-controller-manager-operator/kube-controller-manager-operator-7697d7b57b-tdgp9) labels changed node wide: true\nI0708 17:56:21.711014   68885 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:56:21.712832   68885 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:56:21.811865   68885 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 17:56:30.032877   68885 openshift-tuned.go:550] Pod (openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-646b4d6fc-d6mwj) labels changed node wide: true\n
Jul 08 17:59:10.038 E ns/openshift-monitoring pod/node-exporter-dfnlc node/ip-10-0-128-27.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:42:04Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:04Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 17:59:10.049 E ns/openshift-controller-manager pod/controller-manager-p6dkx node/ip-10-0-128-27.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 08 17:59:10.062 E ns/openshift-sdn pod/sdn-controller-lvf6j node/ip-10-0-128-27.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0708 17:45:14.481547       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 08 17:59:10.077 E ns/openshift-sdn pod/ovs-d6lmz node/ip-10-0-128-27.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error):  the last 0 s (4 deletes)\n2020-07-08T17:56:15.471Z|00204|bridge|INFO|bridge br0: deleted interface veth2803196c on port 22\n2020-07-08T17:56:15.509Z|00205|connmgr|INFO|br0<->unix#695: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:15.556Z|00206|connmgr|INFO|br0<->unix#698: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:15.609Z|00207|bridge|INFO|bridge br0: deleted interface veth3c2b82a7 on port 12\n2020-07-08T17:56:15.656Z|00208|connmgr|INFO|br0<->unix#701: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:15.696Z|00209|connmgr|INFO|br0<->unix#704: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:15.789Z|00210|bridge|INFO|bridge br0: deleted interface vethca78bcb8 on port 21\n2020-07-08T17:56:16.044Z|00211|connmgr|INFO|br0<->unix#707: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:16.076Z|00212|connmgr|INFO|br0<->unix#710: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:16.106Z|00213|bridge|INFO|bridge br0: deleted interface veth0c1c3f66 on port 16\n2020-07-08T17:56:16.554Z|00214|connmgr|INFO|br0<->unix#715: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:16.592Z|00215|connmgr|INFO|br0<->unix#718: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:16.622Z|00216|bridge|INFO|bridge br0: deleted interface veth8845c121 on port 15\n2020-07-08T17:56:17.161Z|00217|connmgr|INFO|br0<->unix#721: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:17.196Z|00218|connmgr|INFO|br0<->unix#724: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:17.227Z|00219|bridge|INFO|bridge br0: deleted interface vethc7bb1bc2 on port 13\n2020-07-08T17:56:17.473Z|00220|connmgr|INFO|br0<->unix#727: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:56:17.507Z|00221|connmgr|INFO|br0<->unix#730: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:56:17.538Z|00222|bridge|INFO|bridge br0: deleted interface veth316cdb57 on port 5\n2020-07-08 17:56:31 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 17:59:10.107 E ns/openshift-multus pod/multus-29hqc node/ip-10-0-128-27.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 17:59:10.118 E ns/openshift-multus pod/multus-admission-controller-gfqp9 node/ip-10-0-128-27.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Jul 08 17:59:10.179 E ns/openshift-machine-config-operator pod/machine-config-daemon-26lpf node/ip-10-0-128-27.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 17:59:10.196 E ns/openshift-machine-config-operator pod/machine-config-server-v77b8 node/ip-10-0-128-27.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0708 17:56:16.323691       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-2-g738d844d-dirty (738d844de27ff701ed022862cafda4431a6c074f)\nI0708 17:56:16.324470       1 api.go:56] Launching server on :22624\nI0708 17:56:16.324551       1 api.go:56] Launching server on :22623\n
Jul 08 17:59:11.422 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-27.us-west-2.compute.internal" not ready since 2020-07-08 17:59:08 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)
Jul 08 17:59:11.436 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-27.us-west-2.compute.internal" not ready since 2020-07-08 17:59:08 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)
Jul 08 17:59:13.628 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 79: connect: connection refused". Reconnecting...\nE0708 17:56:30.851062       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.851208       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.851824       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.871571       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.871682       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.871777       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.871779       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.871818       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.872156       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.872662       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.873040       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 17:56:30.873153       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0708 17:56:30.986712       1 controller.go:606] quota admission added evaluator for: daemonsets.apps\nI0708 17:56:31.017926       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0708 17:56:31.018144       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Jul 08 17:59:13.628 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0708 17:38:22.991273       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 08 17:59:13.628 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-27.us-west-2.compute.internal node/ip-10-0-128-27.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0708 17:49:47.366481       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:49:47.366794       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0708 17:49:47.573216       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:49:47.573453       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Jul 08 17:59:14.606 E ns/openshift-multus pod/multus-29hqc node/ip-10-0-128-27.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 17:59:16.672 E ns/openshift-multus pod/multus-29hqc node/ip-10-0-128-27.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 17:59:21.707 E ns/openshift-machine-config-operator pod/machine-config-daemon-26lpf node/ip-10-0-128-27.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 17:59:31.445 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-143-89.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-08T17:59:08.898Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-08T17:59:08.901Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-08T17:59:08.902Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-08T17:59:08.903Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-08T17:59:08.903Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-08T17:59:08.904Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-08T17:59:08.904Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-08T17:59:08.904Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-08T17:59:08.905Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-08T17:59:08.905Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-08
Jul 08 17:59:58.659 E ns/openshift-machine-api pod/machine-api-controllers-7689d5dd9-wgl2q node/ip-10-0-134-122.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 08 18:00:00.063 E ns/openshift-authentication-operator pod/authentication-operator-6dff88cc8c-jxvhc node/ip-10-0-134-122.us-west-2.compute.internal container=operator container exited with code 255 (Error): rsion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: the server is currently unable to handle the request (post oauthclients.oauth.openshift.io)" to "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.128.0.53:32936->172.30.0.10:53: i/o timeout"\nI0708 17:57:26.240204       1 status_controller.go:166] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-08T17:22:01Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-08T17:42:00Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-08T17:28:07Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-08T17:19:02Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0708 17:57:26.246378       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"0019f60d-faa7-439b-834a-09e85246cbee", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.128.0.53:32936->172.30.0.10:53: i/o timeout" to ""\nW0708 17:59:36.740749       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 34084 (34099)\nI0708 17:59:57.706831       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:59:57.706891       1 leaderelection.go:66] leaderelection lost\n
Jul 08 18:00:00.212 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-646b4d6fc-nx9pb node/ip-10-0-134-122.us-west-2.compute.internal container=operator container exited with code 255 (Error): : (5.878089ms) 200 [Prometheus/2.14.0 10.131.0.27:48750]\nI0708 17:59:26.673317       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0708 17:59:36.515865       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 47 items received\nI0708 17:59:36.694230       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0708 17:59:37.094422       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0708 17:59:37.094449       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0708 17:59:37.095758       1 httplog.go:90] GET /metrics: (6.099341ms) 200 [Prometheus/2.14.0 10.128.2.14:41292]\nI0708 17:59:39.256415       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 3 items received\nI0708 17:59:46.703770       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0708 17:59:48.090572       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0708 17:59:48.090592       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0708 17:59:48.091734       1 httplog.go:90] GET /metrics: (5.790164ms) 200 [Prometheus/2.14.0 10.131.0.27:48750]\nI0708 17:59:56.713341       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0708 17:59:57.583590       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:59:57.583687       1 leaderelection.go:66] leaderelection lost\n
Jul 08 18:00:00.245 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-84dfdgn4g node/ip-10-0-134-122.us-west-2.compute.internal container=operator container exited with code 255 (Error):        1 httplog.go:90] GET /metrics: (1.541768ms) 200 [Prometheus/2.14.0 10.129.2.23:34338]\nI0708 17:57:44.717376       1 httplog.go:90] GET /metrics: (4.588919ms) 200 [Prometheus/2.14.0 10.131.0.27:54152]\nI0708 17:57:46.619627       1 httplog.go:90] GET /metrics: (1.044173ms) 200 [Prometheus/2.14.0 10.129.2.23:34338]\nI0708 17:58:14.717582       1 httplog.go:90] GET /metrics: (4.780155ms) 200 [Prometheus/2.14.0 10.131.0.27:54152]\nI0708 17:58:16.619730       1 httplog.go:90] GET /metrics: (1.13101ms) 200 [Prometheus/2.14.0 10.129.2.23:34338]\nI0708 17:58:44.717425       1 httplog.go:90] GET /metrics: (4.660437ms) 200 [Prometheus/2.14.0 10.131.0.27:54152]\nI0708 17:58:46.619806       1 httplog.go:90] GET /metrics: (1.17228ms) 200 [Prometheus/2.14.0 10.129.2.23:34338]\nI0708 17:59:14.717279       1 httplog.go:90] GET /metrics: (4.543273ms) 200 [Prometheus/2.14.0 10.131.0.27:54152]\nI0708 17:59:36.525812       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 84 items received\nW0708 17:59:36.738754       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 34084 (34099)\nI0708 17:59:37.741124       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0708 17:59:39.255398       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 3 items received\nI0708 17:59:44.719722       1 httplog.go:90] GET /metrics: (6.991752ms) 200 [Prometheus/2.14.0 10.131.0.27:54152]\nI0708 17:59:46.622133       1 httplog.go:90] GET /metrics: (1.145459ms) 200 [Prometheus/2.14.0 10.128.2.14:34626]\nI0708 17:59:57.581899       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:59:57.582000       1 leaderelection.go:66] leaderelection lost\n
Jul 08 18:00:00.507 E ns/openshift-console-operator pod/console-operator-549dd886cb-4szr4 node/ip-10-0-134-122.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): cannot be created (the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console))" to "RouteSyncDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)\nOAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console))"\nE0708 17:57:32.460425       1 controller.go:280] clidownloads-sync-work-queue-key failed with : the server is currently unable to handle the request (get routes.route.openshift.io downloads)\nI0708 17:57:32.515777       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-08T17:19:28Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-08T17:43:07Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-08T17:57:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-08T17:19:28Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0708 17:57:32.523671       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"416bb065-509b-4970-b10c-3eb2e92ed6ed", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteSyncDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)\nOAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console))" to "",Available changed from False to True ("")\nI0708 17:59:57.808834       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:59:57.808887       1 leaderelection.go:66] leaderelection lost\n
Jul 08 18:00:01.381 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-55879584c4-sztt2 node/ip-10-0-134-122.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ert [\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\"]: \\\"scheduler.openshift-kube-scheduler.svc\\\" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer=\\\"openshift-service-serving-signer@1594228493\\\" (2020-07-08 17:15:17 +0000 UTC to 2022-07-08 17:15:18 +0000 UTC (now=2020-07-08 17:39:46.686578313 +0000 UTC))\\nI0708 17:39:46.686793       1 named_certificates.go:53] loaded SNI cert [0/\\\"self-signed loopback\\\"]: \\\"apiserver-loopback-client@1594229986\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1594229986\\\" (2020-07-08 16:39:45 +0000 UTC to 2021-07-08 16:39:45 +0000 UTC (now=2020-07-08 17:39:46.686784274 +0000 UTC))\\nI0708 17:39:46.686863       1 named_certificates.go:74] snimap[\\\"apiserver-loopback-client\\\"]: \\\"apiserver-loopback-client@1594229986\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1594229986\\\" (2020-07-08 16:39:45 +0000 UTC to 2021-07-08 16:39:45 +0000 UTC (now=2020-07-08 17:39:46.686855669 +0000 UTC))\\nI0708 17:39:46.767907       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\\n\"\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-128-27.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-128-27.us-west-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nW0708 17:59:36.747723       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 34084 (34099)\nI0708 17:59:59.005176       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0708 17:59:59.005225       1 leaderelection.go:66] leaderelection lost\n
Jul 08 18:00:02.342 E ns/openshift-cluster-machine-approver pod/machine-approver-5f64546d6-cbnzv node/ip-10-0-134-122.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0708 17:40:23.016883       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0708 17:40:23.016903       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0708 17:40:23.016942       1 main.go:236] Starting Machine Approver\nI0708 17:40:23.119771       1 main.go:146] CSR csr-wq4qd added\nI0708 17:40:23.119795       1 main.go:149] CSR csr-wq4qd is already approved\nI0708 17:40:23.119813       1 main.go:146] CSR csr-dhs2h added\nI0708 17:40:23.119819       1 main.go:149] CSR csr-dhs2h is already approved\nI0708 17:40:23.119831       1 main.go:146] CSR csr-p6rrv added\nI0708 17:40:23.119837       1 main.go:149] CSR csr-p6rrv is already approved\nI0708 17:40:23.119846       1 main.go:146] CSR csr-9jcct added\nI0708 17:40:23.119852       1 main.go:149] CSR csr-9jcct is already approved\nI0708 17:40:23.119860       1 main.go:146] CSR csr-bmmkx added\nI0708 17:40:23.119865       1 main.go:149] CSR csr-bmmkx is already approved\nI0708 17:40:23.119873       1 main.go:146] CSR csr-cqck9 added\nI0708 17:40:23.119878       1 main.go:149] CSR csr-cqck9 is already approved\nI0708 17:40:23.119886       1 main.go:146] CSR csr-kfw6x added\nI0708 17:40:23.119892       1 main.go:149] CSR csr-kfw6x is already approved\nI0708 17:40:23.119899       1 main.go:146] CSR csr-rxx5z added\nI0708 17:40:23.119905       1 main.go:149] CSR csr-rxx5z is already approved\nI0708 17:40:23.119913       1 main.go:146] CSR csr-xd7kk added\nI0708 17:40:23.119919       1 main.go:149] CSR csr-xd7kk is already approved\nI0708 17:40:23.119928       1 main.go:146] CSR csr-49rhb added\nI0708 17:40:23.119934       1 main.go:149] CSR csr-49rhb is already approved\nI0708 17:40:23.119945       1 main.go:146] CSR csr-98jgp added\nI0708 17:40:23.119950       1 main.go:149] CSR csr-98jgp is already approved\nI0708 17:40:23.119958       1 main.go:146] CSR csr-z69sh added\nI0708 17:40:23.119964       1 main.go:149] CSR csr-z69sh is already approved\n
Jul 08 18:00:05.749 E ns/openshift-machine-config-operator pod/machine-config-operator-6479d9c4f8-ch29t node/ip-10-0-134-122.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-machine-config-operator_machine-config-operator-6479d9c4f8-ch29t_751ed14d-18ee-4c42-93b1-c523be41a711/machine-config-operator/0.log": lstat /var/log/pods/openshift-machine-config-operator_machine-config-operator-6479d9c4f8-ch29t_751ed14d-18ee-4c42-93b1-c523be41a711/machine-config-operator/0.log: no such file or directory
Jul 08 18:00:06.296 E ns/openshift-service-ca pod/configmap-cabundle-injector-84bf6dd6f8-6vwkb node/ip-10-0-134-122.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Jul 08 18:00:28.475 E kube-apiserver failed contacting the API: Get https://api.ci-op-h0g8pbdd-e2350.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=35276&timeout=5m48s&timeoutSeconds=348&watch=true: dial tcp 54.185.195.71:6443: connect: connection refused
Jul 08 18:01:15.070 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:01:27.597 E ns/openshift-monitoring pod/node-exporter-q4nkm node/ip-10-0-157-141.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:41:15Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:15Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 18:01:27.647 E ns/openshift-sdn pod/ovs-7cdl7 node/ip-10-0-157-141.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error):  in the last 0 s (4 deletes)\n2020-07-08T17:58:50.484Z|00149|bridge|INFO|bridge br0: deleted interface vethd7bc832b on port 5\n2020-07-08T17:58:50.527Z|00150|connmgr|INFO|br0<->unix#666: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:58:50.572Z|00151|connmgr|INFO|br0<->unix#669: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:58:50.599Z|00152|bridge|INFO|bridge br0: deleted interface veth3672bbf4 on port 10\n2020-07-08T17:58:50.643Z|00153|connmgr|INFO|br0<->unix#672: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:58:50.682Z|00154|connmgr|INFO|br0<->unix#675: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:58:50.705Z|00155|bridge|INFO|bridge br0: deleted interface vethcb1bbab3 on port 11\n2020-07-08T17:58:50.744Z|00156|connmgr|INFO|br0<->unix#678: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:58:50.810Z|00157|connmgr|INFO|br0<->unix#681: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:58:50.834Z|00158|bridge|INFO|bridge br0: deleted interface vethefd253dd on port 3\n2020-07-08T17:58:50.880Z|00159|connmgr|INFO|br0<->unix#684: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:58:50.922Z|00160|connmgr|INFO|br0<->unix#687: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:58:50.953Z|00161|bridge|INFO|bridge br0: deleted interface veth60e1b825 on port 8\n2020-07-08T17:59:19.748Z|00162|connmgr|INFO|br0<->unix#711: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:59:19.775Z|00163|connmgr|INFO|br0<->unix#714: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:59:19.796Z|00164|bridge|INFO|bridge br0: deleted interface vethb8279308 on port 16\n2020-07-08T17:59:35.034Z|00165|connmgr|INFO|br0<->unix#729: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T17:59:35.061Z|00166|connmgr|INFO|br0<->unix#732: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T17:59:35.082Z|00167|bridge|INFO|bridge br0: deleted interface vethf4b53007 on port 4\n2020-07-08 17:59:41 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 18:01:27.682 E ns/openshift-multus pod/multus-qrdcm node/ip-10-0-157-141.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 18:01:27.683 E ns/openshift-machine-config-operator pod/machine-config-daemon-w9n7g node/ip-10-0-157-141.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 18:01:27.697 E ns/openshift-cluster-node-tuning-operator pod/tuned-fvck7 node/ip-10-0-157-141.us-west-2.compute.internal container=tuned container exited with code 143 (Error): .775108   96853 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:58:55.777083   96853 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:58:55.902437   96853 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:58:59.893742   96853 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-6764cf58b6-bw9d4) labels changed node wide: true\nI0708 17:59:00.775104   96853 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:59:00.776826   96853 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:59:00.889406   96853 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:59:29.895749   96853 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-6631/foo-qzh98) labels changed node wide: true\nI0708 17:59:30.775090   96853 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:59:30.776787   96853 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:59:30.887755   96853 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 17:59:39.888999   96853 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-1104/service-test-b65h2) labels changed node wide: true\nI0708 17:59:40.775111   96853 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 17:59:40.776890   96853 openshift-tuned.go:441] Getting recommended profile...\nI0708 17:59:40.888936   96853 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\n2020-07-08 17:59:41,070 INFO     tuned.daemon.controller: terminating controller\nI0708 17:59:41.071959   96853 openshift-tuned.go:137] Received signal: terminated\nI0708 17:59:41.071998   96853 openshift-tuned.go:304] Sending TERM to PID 97038\n
Jul 08 18:01:30.537 E ns/openshift-marketplace pod/redhat-operators-864c7d669c-89frz node/ip-10-0-128-226.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 08 18:01:34.082 E ns/openshift-multus pod/multus-qrdcm node/ip-10-0-157-141.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 18:01:34.852 E clusteroperator/monitoring changed Degraded to True: UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: reconciling Alertmanager object failed: updating Alertmanager object failed: rpc error: code = Unavailable desc = etcdserver: leader changed
Jul 08 18:01:38.146 E ns/openshift-machine-config-operator pod/machine-config-daemon-w9n7g node/ip-10-0-157-141.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 18:01:41.565 E ns/openshift-marketplace pod/certified-operators-8774f5bd4-pkswc node/ip-10-0-128-226.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 08 18:01:58.899 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-226.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/08 17:40:41 Watching directory: "/etc/alertmanager/config"\n
Jul 08 18:01:58.899 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-226.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/08 17:40:41 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:40:41 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:40:41 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:40:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/08 17:40:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:40:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/08 17:40:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:40:41 http.go:106: HTTPS: listening on [::]:9095\n2020/07/08 17:46:08 reverseproxy.go:447: http: proxy error: context canceled\n2020/07/08 17:56:13 server.go:3012: http: TLS handshake error from 10.129.2.21:36534: EOF\n2020/07/08 17:56:36 oauthproxy.go:782: requestauth: 10.131.0.27:36866 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/08 17:56:37 oauthproxy.go:782: requestauth: 10.131.0.27:36866 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/08 17:56:42 oauthproxy.go:782: requestauth: 10.131.0.27:36866 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n2020/07/08 17:56:44 oauthproxy.go:782: requestauth: 10.129.2.23:35680 Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 08 18:01:58.999 E ns/openshift-monitoring pod/openshift-state-metrics-59fc9ffdc8-qh62c node/ip-10-0-128-226.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 08 18:01:59.013 E ns/openshift-monitoring pod/prometheus-adapter-6764cf58b6-wh57j node/ip-10-0-128-226.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0708 17:56:27.726861       1 adapter.go:93] successfully using in-cluster auth\nI0708 17:56:28.730547       1 secure_serving.go:116] Serving securely on [::]:6443\nE0708 18:00:28.228010       1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Node: Get https://172.30.0.1:443/api/v1/nodes?resourceVersion=35244&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 08 18:01:59.039 E ns/openshift-monitoring pod/thanos-querier-7655498799-c66nf node/ip-10-0-128-226.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/08 17:40:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:40:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:40:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:40:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/08 17:40:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:40:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:40:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:40:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/08 17:40:55 http.go:106: HTTPS: listening on [::]:9091\n
Jul 08 18:02:17.397 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-157-141.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-07-08T18:02:15.501Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-08T18:02:15.508Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-08T18:02:15.509Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-08T18:02:15.516Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-08T18:02:15.516Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-07-08T18:02:15.516Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-08T18:02:15.516Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-08T18:02:15.517Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-08T18:02:15.517Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-08T18:02:15.517Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-08T18:02:15.517Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-08T18:02:15.517Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-08T18:02:15.517Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-08T18:02:15.517Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-07-08T18:02:15.518Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-08T18:02:15.518Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-07-08
Jul 08 18:02:47.525 E ns/openshift-monitoring pod/node-exporter-rp5mj node/ip-10-0-134-122.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:41:34Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:34Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 18:02:47.545 E ns/openshift-controller-manager pod/controller-manager-dpjmd node/ip-10-0-134-122.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 08 18:02:47.558 E ns/openshift-sdn pod/sdn-controller-mlmzf node/ip-10-0-134-122.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0708 17:45:03.684444       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0708 17:45:03.700880       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"595356f6-3824-425a-a0de-53d22699c839", ResourceVersion:"26931", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729825247, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-134-122\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-07-08T17:14:07Z\",\"renewTime\":\"2020-07-08T17:45:03Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-134-122 became leader'\nI0708 17:45:03.700982       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0708 17:45:03.706482       1 master.go:51] Initializing SDN master\nI0708 17:45:03.742865       1 network_controller.go:60] Started OpenShift Network Controller\n
Jul 08 18:02:47.595 E ns/openshift-sdn pod/ovs-fbb6n node/ip-10-0-134-122.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): t 0 s (4 deletes)\n2020-07-08T18:00:04.047Z|00273|bridge|INFO|bridge br0: deleted interface veth64c027df on port 26\n2020-07-08T18:00:04.246Z|00274|connmgr|INFO|br0<->unix#988: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:04.450Z|00275|connmgr|INFO|br0<->unix#991: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:04.519Z|00276|bridge|INFO|bridge br0: deleted interface veth693dee71 on port 24\n2020-07-08T18:00:04.633Z|00277|connmgr|INFO|br0<->unix#994: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:04.722Z|00278|connmgr|INFO|br0<->unix#997: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:04.869Z|00279|bridge|INFO|bridge br0: deleted interface vetheb3744bc on port 32\n2020-07-08T18:00:04.951Z|00280|connmgr|INFO|br0<->unix#1000: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:05.011Z|00281|connmgr|INFO|br0<->unix#1003: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:05.081Z|00282|bridge|INFO|bridge br0: deleted interface veth356b6001 on port 9\n2020-07-08T18:00:05.307Z|00283|connmgr|INFO|br0<->unix#1006: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:05.410Z|00284|connmgr|INFO|br0<->unix#1009: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:05.472Z|00285|bridge|INFO|bridge br0: deleted interface vethfeebc04a on port 19\n2020-07-08T18:00:05.560Z|00286|connmgr|INFO|br0<->unix#1012: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:05.675Z|00287|connmgr|INFO|br0<->unix#1015: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:05.806Z|00288|bridge|INFO|bridge br0: deleted interface vethb743271f on port 30\n2020-07-08T18:00:25.764Z|00289|connmgr|INFO|br0<->unix#1033: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:00:25.789Z|00290|connmgr|INFO|br0<->unix#1036: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:00:25.809Z|00291|bridge|INFO|bridge br0: deleted interface vethacf0873c on port 14\n2020-07-08 18:00:28 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 18:02:47.635 E ns/openshift-multus pod/multus-admission-controller-wjrh9 node/ip-10-0-134-122.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Jul 08 18:02:47.650 E ns/openshift-multus pod/multus-njggw node/ip-10-0-134-122.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 18:02:47.684 E ns/openshift-machine-config-operator pod/machine-config-daemon-mshw5 node/ip-10-0-134-122.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 18:02:47.707 E ns/openshift-machine-config-operator pod/machine-config-server-4qfv6 node/ip-10-0-134-122.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0708 17:56:30.293907       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-2-g738d844d-dirty (738d844de27ff701ed022862cafda4431a6c074f)\nI0708 17:56:30.295084       1 api.go:56] Launching server on :22624\nI0708 17:56:30.295192       1 api.go:56] Launching server on :22623\n
Jul 08 18:02:47.722 E ns/openshift-cluster-node-tuning-operator pod/tuned-zv7gz node/ip-10-0-134-122.us-west-2.compute.internal container=tuned container exited with code 143 (Error): Getting recommended profile...\nI0708 18:00:01.674610  103633 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 18:00:01.682166  103633 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-134-122.us-west-2.compute.internal) labels changed node wide: false\nI0708 18:00:01.687063  103633 openshift-tuned.go:550] Pod (openshift-cluster-version/cluster-version-operator-8db64fb4c-bnngv) labels changed node wide: true\nI0708 18:00:05.800148  103633 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:00:05.807662  103633 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:00:06.164572  103633 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 18:00:06.235955  103633 openshift-tuned.go:550] Pod (openshift-dns-operator/dns-operator-549cd8fb98-2xdbv) labels changed node wide: true\nI0708 18:00:10.797134  103633 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:00:10.798494  103633 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:00:10.911392  103633 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 18:00:19.174882  103633 openshift-tuned.go:550] Pod (openshift-service-ca/service-serving-cert-signer-77b49645dd-f46j2) labels changed node wide: true\nI0708 18:00:20.797135  103633 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:00:20.798490  103633 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:00:20.898149  103633 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0708 18:00:27.282855  103633 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-57579b6f54-kplsr) labels changed node wide: true\n
Jul 08 18:02:47.796 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): red revision has been compacted\nE0708 18:00:27.910876       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.910893       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.910906       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.910916       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.910930       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.910908       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.911263       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.911351       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.911449       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.911458       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.911479       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.940756       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0708 18:00:27.940821       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0708 18:00:28.046825       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0708 18:00:28.046826       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Jul 08 18:02:47.796 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0708 17:36:00.258805       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 08 18:02:47.796 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0708 17:57:41.538144       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:57:41.538448       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0708 17:57:41.743436       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:57:41.743683       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Jul 08 18:02:47.823 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 1 (Error): I0708 17:37:49.430951       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0708 17:37:49.433836       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0708 17:37:49.434471       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Jul 08 18:02:47.823 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:59:10.797082       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:59:10.797498       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:59:20.806449       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:59:20.806713       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:59:30.814833       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:59:30.815083       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:59:40.822345       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:59:40.822824       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 17:59:50.831917       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 17:59:50.832153       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:00:00.876819       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:00:00.877131       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:00:10.885412       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:00:10.885757       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:00:20.893940       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:00:20.894567       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Jul 08 18:02:47.823 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=kube-controller-manager-8 container exited with code 2 (Error):  event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"community-operators-9f7d9f8f8", UID:"5a7cf1be-9e52-4191-bb4a-d6cc849ffd72", APIVersion:"apps/v1", ResourceVersion:"35271", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: community-operators-9f7d9f8f8-nrrqh\nI0708 18:00:22.352902       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-marketplace/community-operators", retrying. Error: endpoints "community-operators" already exists\nI0708 18:00:22.353013       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"community-operators", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service openshift-marketplace/community-operators: endpoints "community-operators" already exists\nI0708 18:00:22.456146       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-operators", UID:"fdea0fde-08ab-4012-adea-e09fc3f2a2e6", APIVersion:"apps/v1", ResourceVersion:"35291", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-55f9d86b69 to 1\nI0708 18:00:22.456531       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-55f9d86b69, need 1, creating 1\nI0708 18:00:22.466718       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0708 18:00:22.485770       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-55f9d86b69", UID:"9c76f9e8-8874-49e2-a759-55dbc58ed295", APIVersion:"apps/v1", ResourceVersion:"35293", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-55f9d86b69-b4jp9\n
Jul 08 18:02:47.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-122.us-west-2.compute.internal node/ip-10-0-134-122.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): atable: CPU<3500m>|Memory<14795332Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0708 18:00:16.643145       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-w6tmd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0708 18:00:22.175558       1 scheduler.go:667] pod openshift-marketplace/certified-operators-849d89b96d-jlkjr is bound successfully on node "ip-10-0-143-89.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0708 18:00:22.346377       1 scheduler.go:667] pod openshift-marketplace/community-operators-9f7d9f8f8-nrrqh is bound successfully on node "ip-10-0-143-89.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0708 18:00:22.497430       1 scheduler.go:667] pod openshift-marketplace/redhat-operators-55f9d86b69-b4jp9 is bound successfully on node "ip-10-0-143-89.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15268400Ki>|Pods<250>|StorageEphemeral<114381692328>.".\nI0708 18:00:25.060703       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-w6tmd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Jul 08 18:02:55.481 E ns/openshift-multus pod/multus-njggw node/ip-10-0-134-122.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 18:02:58.729 E ns/openshift-machine-config-operator pod/machine-config-daemon-mshw5 node/ip-10-0-134-122.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 18:02:59.787 E ns/openshift-multus pod/multus-njggw node/ip-10-0-134-122.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 18:03:08.762 E ns/openshift-machine-api pod/machine-api-controllers-7689d5dd9-rbtrr node/ip-10-0-156-186.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 08 18:03:13.283 E ns/openshift-monitoring pod/thanos-querier-7655498799-d2npt node/ip-10-0-156-186.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/08 17:40:41 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:40:41 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/08 17:40:41 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/08 17:40:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/08 17:40:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/08 17:40:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/08 17:40:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/08 17:40:41 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/08 17:40:41 http.go:106: HTTPS: listening on [::]:9091\n
Jul 08 18:04:59.713 E ns/openshift-monitoring pod/node-exporter-mx2xl node/ip-10-0-128-226.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:41:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:41:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 18:04:59.760 E ns/openshift-sdn pod/ovs-kjxtw node/ip-10-0-128-226.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): x#913: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:01:58.425Z|00195|connmgr|INFO|br0<->unix#916: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:01:58.464Z|00196|bridge|INFO|bridge br0: deleted interface vethce425244 on port 4\n2020-07-08T18:01:58.503Z|00197|connmgr|INFO|br0<->unix#919: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:01:58.536Z|00198|connmgr|INFO|br0<->unix#922: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:01:58.561Z|00199|bridge|INFO|bridge br0: deleted interface vethaf622cb3 on port 17\n2020-07-08T18:01:58.597Z|00200|connmgr|INFO|br0<->unix#925: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:01:58.653Z|00201|connmgr|INFO|br0<->unix#928: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:01:58.679Z|00202|bridge|INFO|bridge br0: deleted interface vethb9426ae4 on port 3\n2020-07-08T18:01:58.715Z|00203|connmgr|INFO|br0<->unix#931: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:01:58.748Z|00204|connmgr|INFO|br0<->unix#934: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:01:58.774Z|00205|bridge|INFO|bridge br0: deleted interface veth1740d03b on port 23\n2020-07-08T18:02:26.700Z|00206|connmgr|INFO|br0<->unix#958: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:02:26.728Z|00207|connmgr|INFO|br0<->unix#961: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:02:26.750Z|00208|bridge|INFO|bridge br0: deleted interface veth1a3ff66f on port 22\n2020-07-08T18:02:42.124Z|00021|jsonrpc|WARN|unix#870: receive error: Connection reset by peer\n2020-07-08T18:02:42.124Z|00022|reconnect|WARN|unix#870: connection dropped (Connection reset by peer)\n2020-07-08T18:02:42.085Z|00209|connmgr|INFO|br0<->unix#976: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:02:42.113Z|00210|connmgr|INFO|br0<->unix#979: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:02:42.135Z|00211|bridge|INFO|bridge br0: deleted interface vethe0a86360 on port 11\n info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 18:04:59.796 E ns/openshift-multus pod/multus-kfvzx node/ip-10-0-128-226.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 18:04:59.802 E ns/openshift-machine-config-operator pod/machine-config-daemon-wtfvq node/ip-10-0-128-226.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 18:04:59.811 E ns/openshift-cluster-node-tuning-operator pod/tuned-dn5vs node/ip-10-0-128-226.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 2e-k8s-sig-apps-replicaset-upgrade-133/rs-6fm8n) labels changed node wide: true\nI0708 18:02:05.917372  146276 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:02:05.919166  146276 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:02:06.037339  146276 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 18:02:15.281774  146276 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-7655498799-c66nf) labels changed node wide: true\nI0708 18:02:15.911017  146276 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:02:15.912620  146276 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:02:16.028581  146276 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 18:02:28.003902  146276 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-6631/foo-xhwf8) labels changed node wide: true\nI0708 18:02:30.911018  146276 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:02:30.912746  146276 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:02:31.027679  146276 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 18:02:44.031738  146276 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-1104/service-test-hpclv) labels changed node wide: true\nI0708 18:02:45.910996  146276 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0708 18:02:45.913065  146276 openshift-tuned.go:441] Getting recommended profile...\nI0708 18:02:46.030034  146276 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0708 18:03:15.288039  146276 openshift-tuned.go:550] Pod (openshift-monitoring/openshift-state-metrics-59fc9ffdc8-qh62c) labels changed node wide: true\n
Jul 08 18:05:05.387 E ns/openshift-multus pod/multus-kfvzx node/ip-10-0-128-226.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 18:05:08.420 E ns/openshift-machine-config-operator pod/machine-config-daemon-wtfvq node/ip-10-0-128-226.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 18:05:15.070 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:45.070 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 08 18:05:56.477 E ns/openshift-marketplace pod/redhat-operators-55f9d86b69-b4jp9 node/ip-10-0-143-89.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 08 18:06:07.431 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=cluster-policy-controller-8 container exited with code 1 (Error): rted: 2020-07-08 18:00:28.944972072 +0000 UTC m=+1201.626765025) (total time: 30.001439474s):\nTrace[608570299]: [30.001439474s] [30.001439474s] END\nE0708 18:00:58.946462       1 reflector.go:126] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nI0708 18:00:59.873647       1 trace.go:81] Trace[577054499]: "Reflector github.com/openshift/client-go/build/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-07-08 18:00:29.871984559 +0000 UTC m=+1202.553777467) (total time: 30.001639584s):\nTrace[577054499]: [30.001639584s] [30.001639584s] END\nE0708 18:00:59.873667       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0708 18:01:16.694588       1 reflector.go:270] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nI0708 18:01:16.694637       1 trace.go:81] Trace[1739822479]: "Reflector github.com/openshift/client-go/build/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-07-08 18:01:00.873837109 +0000 UTC m=+1233.555630106) (total time: 15.820776368s):\nTrace[1739822479]: [15.820776368s] [15.820776368s] END\nE0708 18:01:16.695094       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0708 18:01:19.766776       1 reflector.go:126] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\n
Jul 08 18:06:07.431 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-controller-manager-8 container exited with code 2 (Error):  modified; please apply your changes to the latest version and try again\nI0708 18:03:29.783501       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\nI0708 18:03:35.581417       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/grafana: Operation cannot be fulfilled on deployments.apps "grafana": the object has been modified; please apply your changes to the latest version and try again\nI0708 18:03:35.694694       1 replica_set.go:607] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-64f65c5df5, need 0, deleting 1\nI0708 18:03:35.694803       1 replica_set.go:225] Found 8 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-64f65c5df5: packageserver-b857c495b, packageserver-64f65c5df5, packageserver-6bfbc6749, packageserver-6889f5899b, packageserver-5bc97ff4b, packageserver-85ddd5dbdd, packageserver-85c699c955, packageserver-7f8bc6b8c7\nI0708 18:03:35.694940       1 controller_utils.go:602] Controller packageserver-64f65c5df5 deleting pod openshift-operator-lifecycle-manager/packageserver-64f65c5df5-lhvbt\nI0708 18:03:35.695021       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"af0090d6-892a-4e11-8b09-8edef97f7cde", APIVersion:"apps/v1", ResourceVersion:"38366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-64f65c5df5 to 0\nI0708 18:03:35.705604       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-64f65c5df5", UID:"8ca55673-1ea3-4d29-a500-d30c3f3c5df5", APIVersion:"apps/v1", ResourceVersion:"38481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-64f65c5df5-lhvbt\n
Jul 08 18:06:07.431 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error): -ca-bundle true}]\nI0708 18:02:46.210743       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:02:56.219589       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:02:56.219888       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:03:06.229928       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:03:06.230281       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:03:16.283162       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:03:16.283627       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:03:26.290894       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:03:26.291176       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0708 18:03:36.307495       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0708 18:03:36.307772       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nE0708 18:03:43.280582       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=36836&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0708 18:03:43.280676       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=38500&timeout=6m58s&timeoutSeconds=418&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Jul 08 18:06:07.501 E ns/openshift-controller-manager pod/controller-manager-t2j72 node/ip-10-0-156-186.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 08 18:06:07.516 E ns/openshift-monitoring pod/node-exporter-s56lh node/ip-10-0-156-186.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 7-08T17:42:17Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-08T17:42:17Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 08 18:06:07.523 E ns/openshift-sdn pod/sdn-controller-wm6r2 node/ip-10-0-156-186.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0708 17:44:57.231474       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Jul 08 18:06:07.532 E ns/openshift-sdn pod/ovs-ww2sx node/ip-10-0-156-186.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): s)\n2020-07-08T18:03:12.942Z|00294|connmgr|INFO|br0<->unix#1190: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:03:12.973Z|00295|bridge|INFO|bridge br0: deleted interface veth59968292 on port 35\n2020-07-08T18:03:13.736Z|00296|bridge|INFO|bridge br0: added interface veth66e2e115 on port 37\n2020-07-08T18:03:13.763Z|00297|connmgr|INFO|br0<->unix#1194: 5 flow_mods in the last 0 s (5 adds)\n2020-07-08T18:03:13.832Z|00298|connmgr|INFO|br0<->unix#1198: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-07-08T18:03:13.837Z|00299|connmgr|INFO|br0<->unix#1200: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:03:14.635Z|00300|bridge|INFO|bridge br0: added interface vethbe959eaf on port 38\n2020-07-08T18:03:14.662Z|00301|connmgr|INFO|br0<->unix#1203: 5 flow_mods in the last 0 s (5 adds)\n2020-07-08T18:03:14.704Z|00302|connmgr|INFO|br0<->unix#1207: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:03:14.707Z|00303|connmgr|INFO|br0<->unix#1209: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-07-08T18:03:16.325Z|00304|connmgr|INFO|br0<->unix#1212: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:03:16.373Z|00305|connmgr|INFO|br0<->unix#1216: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:03:16.410Z|00306|bridge|INFO|bridge br0: deleted interface veth66e2e115 on port 37\n2020-07-08T18:03:17.221Z|00307|connmgr|INFO|br0<->unix#1220: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:03:17.247Z|00308|connmgr|INFO|br0<->unix#1223: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:03:17.268Z|00309|bridge|INFO|bridge br0: deleted interface vethbe959eaf on port 38\n2020-07-08T18:03:29.392Z|00310|connmgr|INFO|br0<->unix#1233: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-08T18:03:29.417Z|00311|connmgr|INFO|br0<->unix#1236: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-08T18:03:29.438Z|00312|bridge|INFO|bridge br0: deleted interface veth029b6649 on port 36\n2020-07-08 18:03:43 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Jul 08 18:06:07.552 E ns/openshift-multus pod/multus-admission-controller-t7zb8 node/ip-10-0-156-186.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 08 18:06:07.599 E ns/openshift-multus pod/multus-6685s node/ip-10-0-156-186.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Jul 08 18:06:07.636 E ns/openshift-machine-config-operator pod/machine-config-daemon-58q28 node/ip-10-0-156-186.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 08 18:06:07.660 E ns/openshift-machine-config-operator pod/machine-config-server-x2pqj node/ip-10-0-156-186.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0708 17:56:12.829560       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-2-g738d844d-dirty (738d844de27ff701ed022862cafda4431a6c074f)
I0708 17:56:12.830433       1 api.go:56] Launching server on :22624
I0708 17:56:12.830541       1 api.go:56] Launching server on :22623
Jul 08 18:06:07.675 E ns/openshift-cluster-node-tuning-operator pod/tuned-tnfxl node/ip-10-0-156-186.us-west-2.compute.internal container=tuned container exited with code 143 (Error):    tuned.daemon.application: dynamic tuning is globally disabled
2020-07-08 18:03:27,783 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)
2020-07-08 18:03:27,783 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.
2020-07-08 18:03:27,785 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile
2020-07-08 18:03:27,787 INFO     tuned.profiles.loader: loading profile: openshift-control-plane
2020-07-08 18:03:27,836 INFO     tuned.daemon.controller: starting controller
2020-07-08 18:03:27,837 INFO     tuned.daemon.daemon: starting tuning
2020-07-08 18:03:27,843 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1
2020-07-08 18:03:27,844 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform
2020-07-08 18:03:27,849 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias
2020-07-08 18:03:27,850 INFO     tuned.plugins.base: instance disk: assigning devices dm-0
2020-07-08 18:03:27,851 INFO     tuned.plugins.base: instance net: assigning devices ens5
2020-07-08 18:03:27,914 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl
2020-07-08 18:03:27,916 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied
I0708 18:03:42.030297  127693 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-769955cdb7-wg7hr) labels changed node wide: true
I0708 18:03:42.547470  127693 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg
I0708 18:03:42.548616  127693 openshift-tuned.go:441] Getting recommended profile...
I0708 18:03:42.649013  127693 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.
I0708 18:03:43.123470  127693 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-156-186.us-west-2.compute.internal) labels changed node wide: true
Jul 08 18:06:12.554 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error):  required revision has been compacted
E0708 18:03:43.079882       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.079918       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.079999       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.080018       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.080040       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.080062       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.080105       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0708 18:03:43.080115       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
I0708 18:03:43.113971       1 cacher.go:771] cacher (*core.Pod): 2 objects queued in incoming channel.
I0708 18:03:43.190852       1 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0708 18:03:43.190802       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving
W0708 18:03:43.199468       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.128.27 10.0.134.122]
I0708 18:03:43.204242       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished
Jul 08 18:06:12.554 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0708 17:40:28.993378       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Jul 08 18:06:12.554 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]
I0708 18:02:10.599499       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
I0708 18:02:10.599799       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]
I0708 18:02:10.806251       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]
I0708 18:02:10.807375       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]
Jul 08 18:06:12.580 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-186.us-west-2.compute.internal node/ip-10-0-156-186.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): -0-128-27.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16118340Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<14967364Ki>|Pods<250>|StorageEphemeral<114381692328>.".
I0708 18:03:28.903955       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-4rltv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting
I0708 18:03:28.931675       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-4rltv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting
I0708 18:03:30.487731       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-4rltv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting
I0708 18:03:32.932063       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-4rltv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting
I0708 18:03:37.488590       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-5c7f684b5-4rltv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting
Jul 08 18:06:14.790 E ns/openshift-multus pod/multus-6685s node/ip-10-0-156-186.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 08 18:06:17.882 E ns/openshift-machine-config-operator pod/machine-config-daemon-58q28 node/ip-10-0-156-186.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 08 18:08:09.847 E clusteroperator/authentication changed Degraded to True: RouteStatusDegradedFailedCreate: RouteStatusDegraded: the server is currently unable to handle the request (get routes.route.openshift.io oauth-openshift)