Result: SUCCESS
Tests: 4 failed / 21 succeeded
Started: 2020-08-06 23:59
Elapsed: 1h51m
Work namespace: ci-op-320tg6yx
Refs: release-4.3:a9548e6d, 63:b7b4fa48
Pod: d26026c0-d840-11ea-84b9-0a580a820729
Repo: openshift/cluster-storage-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 35m52s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 4s of 32m30s (0%):

Aug 07 01:20:20.258 E ns/e2e-k8s-service-lb-available-4383 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 01:20:20.310 I ns/e2e-k8s-service-lb-available-4383 svc/service-test Service started responding to GET requests on reused connections
Aug 07 01:21:28.258 E ns/e2e-k8s-service-lb-available-4383 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 01:21:28.325 I ns/e2e-k8s-service-lb-available-4383 svc/service-test Service started responding to GET requests on reused connections
Aug 07 01:21:54.258 E ns/e2e-k8s-service-lb-available-4383 svc/service-test Service stopped responding to GET requests on reused connections
Aug 07 01:21:54.325 I ns/e2e-k8s-service-lb-available-4383 svc/service-test Service started responding to GET requests on reused connections
Aug 07 01:22:21.258 E ns/e2e-k8s-service-lb-available-4383 svc/service-test Service stopped responding to GET requests over new connections
Aug 07 01:22:21.341 I ns/e2e-k8s-service-lb-available-4383 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1596764530.xml
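
The "(0%)" in the summary above is just the total outage divided by the monitored window, truncated to a whole percent, so 4s out of 32m30s reports as 0%. A minimal Go sketch of that arithmetic (illustrative only, not the e2e suite's own reporting code; the 4s and 32m30s values are taken from the summary line above):

package main

import (
	"fmt"
	"time"
)

func main() {
	outage := 4 * time.Second                 // total disruption from the summary line above
	window := 32*time.Minute + 30*time.Second // length of the monitored upgrade window

	// Integer truncation: 4s / 1950s = 0.2%, which reports as 0%.
	pct := int(100 * outage.Seconds() / window.Seconds())
	fmt.Printf("Service was unreachable for at least %v of %v (%d%%)\n", outage, window, pct)
}

The same math explains the other summaries below: 3m41s of 35m21s is about 10%, and 1m20s of 35m21s is about 4%.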



Cluster upgrade Cluster frontend ingress remain available 35m22s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 3m41s of 35m21s (10%):

Aug 07 01:18:11.912 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:18:11.912 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:18:12.031 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:18:12.031 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:20:19.911 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:20:19.912 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:20:19.912 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 07 01:20:20.017 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 07 01:20:20.911 - 4s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:20:20.911 - 5s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 01:20:22.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:20:23.911 - 2s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:20:26.546 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:20:27.070 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:20:27.255 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:20:27.396 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:20:27.911 - 1s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:20:27.990 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:20:28.911 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:20:29.020 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:20:29.044 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:20:30.027 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:20:30.911 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:20:30.968 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:20:31.048 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:20:31.911 - 1s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:20:33.017 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:20:44.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:20:44.912 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 07 01:20:45.911 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Aug 07 01:20:45.911 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:20:55.027 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:20:55.030 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 07 01:21:27.912 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:21:28.911 - 2s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 01:21:28.912 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:21:29.911 - 1s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:21:31.115 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:21:32.141 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:22:21.912 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:22:22.020 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:30:27.912 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:30:28.911 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:30:31.913 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:30:32.911 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:30:38.023 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:30:42.024 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:30:43.911 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:30:44.063 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:30:48.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:30:49.022 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:30:52.912 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:30:53.911 - 9s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:30:58.912 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:30:59.045 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:31:00.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:31:01.107 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:31:03.034 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:31:13.912 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:31:14.056 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:33:22.019 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:33:22.911 - 7s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:33:31.067 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:33:38.911 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:33:39.097 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:33:40.046 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:33:40.911 - 7s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:33:41.067 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:33:41.911 - 7s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:33:49.034 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:33:50.055 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:33:59.035 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:33:59.911 - 1s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:34:00.056 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:34:00.911 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:34:01.117 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:34:01.122 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:36:03.000 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:36:03.000 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:36:03.911 - 19s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Aug 07 01:36:03.911 - 24s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:36:13.911 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:36:13.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Aug 07 01:36:14.911 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Aug 07 01:36:14.911 - 14s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 01:36:24.043 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Aug 07 01:36:24.044 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:36:29.038 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:36:29.042 I ns/openshift-console route/console Route started responding to GET requests over new connections
Aug 07 01:36:34.911 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Aug 07 01:36:35.030 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Aug 07 01:36:39.042 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Aug 07 01:36:39.911 - 15s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Aug 07 01:36:44.911 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Aug 07 01:36:45.911 - 9s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Aug 07 01:36:55.043 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Aug 07 01:36:55.276 I ns/openshift-console route/console Route started responding to GET requests over new connections
				from junit_upgrade_1596764530.xml



Cluster upgrade Kubernetes and OpenShift APIs remain available 35m22s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1m20s of 35m21s (4%):

Aug 07 01:20:49.839 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 07 01:20:50.839 - 12s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:21:03.937 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:30:37.920 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:30:38.839 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:30:38.868 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:30:54.840 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded
Aug 07 01:30:54.867 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:31:11.842 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 01:31:11.871 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:33:45.280 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:33:45.839 - 30s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:34:15.871 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:25.968 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:26.839 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:34:32.143 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:35.184 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:35.215 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:38.256 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:38.288 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:44.400 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:44.839 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:34:44.869 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:47.472 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:47.839 E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:34:47.869 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:34:50.544 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:34:50.839 - 11s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:35:03.500 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:05.731 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:05.777 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:08.798 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:08.829 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:11.871 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:11.902 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:14.942 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:14.983 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:21.086 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:21.117 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:24.163 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:24.839 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:35:27.259 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:33.373 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:33.412 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:36.447 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:36.478 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:35:42.591 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Aug 07 01:35:42.839 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:35:45.699 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:37:06.839 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 01:37:06.871 I openshift-apiserver OpenShift API started responding to GET requests
Aug 07 01:37:22.839 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 07 01:37:23.839 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:37:37.883 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1596764530.xml
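
The log lines above come from an availability probe that repeatedly GETs a deliberately missing imagestream (the /imagestreams/missing URL in the errors) and records each transition between responding and not responding. A rough sketch of such a probe follows; it is an assumption-laden illustration, not the openshift-tests monitor itself (the 1s sampling interval is assumed, and a real client would authenticate via kubeconfig rather than use a bare http.Client):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the failure output above; auth/TLS setup omitted for brevity.
	url := "https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"
	client := &http.Client{Timeout: 15 * time.Second}

	available := true
	for range time.Tick(time.Second) { // assumed 1s sampling interval
		resp, err := client.Get(url)
		ok := err == nil && resp.StatusCode < 500 // a 404 still proves the API answered
		if resp != nil {
			resp.Body.Close()
		}
		if ok != available { // log only transitions, as in the output above
			if ok {
				fmt.Println(time.Now().Format(time.StampMilli), "I OpenShift API started responding to GET requests")
			} else {
				fmt.Println(time.Now().Format(time.StampMilli), "E OpenShift API stopped responding to GET requests:", err)
			}
			available = ok
		}
	}
}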



openshift-tests Monitor cluster while tests execute 35m55s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
155 error level events were detected during this test run:

Aug 07 01:09:17.510 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Aug 07 01:11:00.735 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7fb996db45-c6dpv node/ip-10-0-147-237.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 15305 (15709)\nW0807 01:04:36.505936       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14432 (15709)\nW0807 01:04:36.506051       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 15633 (16360)\nW0807 01:04:36.516893       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 15577 (16358)\nW0807 01:04:36.524426       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 15407 (15712)\nW0807 01:04:36.568815       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 13686 (15711)\nW0807 01:05:43.128818       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17968 (18057)\nW0807 01:09:09.629260       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19531 (19537)\nW0807 01:09:18.399006       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19537 (19617)\nW0807 01:09:31.241569       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19617 (19710)\nI0807 01:10:59.793096       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:10:59.793176       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:11:31.344 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :11:30.672876       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:11:30.672885       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:11:30.672894       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:11:30.672902       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:11:30.672908       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:11:30.672913       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:11:30.672919       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:11:30.672924       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:11:30.672929       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:11:30.672935       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:11:30.672964       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:11:30.672977       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:11:30.672983       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:11:30.672989       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:11:30.673019       1 server.go:692] external host was not specified, using 10.0.137.79\nI0807 01:11:30.673184       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:11:30.673440       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:11:53.436 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :11:53.252508       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:11:53.252518       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:11:53.252528       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:11:53.252538       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:11:53.252547       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:11:53.252556       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:11:53.252565       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:11:53.252574       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:11:53.252583       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:11:53.252593       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:11:53.252606       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:11:53.252617       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:11:53.252628       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:11:53.252639       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:11:53.252682       1 server.go:692] external host was not specified, using 10.0.137.79\nI0807 01:11:53.252832       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:11:53.253076       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:12:26.518 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): :12:26.205974       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:12:26.205981       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:12:26.205986       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:12:26.205991       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:12:26.205997       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:12:26.206002       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:12:26.206008       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:12:26.206013       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:12:26.206018       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:12:26.206023       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:12:26.206031       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:12:26.206038       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:12:26.206044       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:12:26.206051       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:12:26.206078       1 server.go:692] external host was not specified, using 10.0.137.79\nI0807 01:12:26.206233       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:12:26.206461       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:12:30.095 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6998db7855-mjlsl node/ip-10-0-147-237.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 15633 (17722)\nW0807 01:04:37.180537       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 15599 (17723)\nW0807 01:04:37.235808       1 reflector.go:299] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 16951 (17723)\nW0807 01:04:37.715442       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 16951 (17723)\nW0807 01:04:37.771037       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 15611 (17722)\nW0807 01:05:43.071979       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17968 (18057)\nW0807 01:09:09.394576       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19531 (19534)\nW0807 01:09:18.410264       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19534 (19617)\nW0807 01:09:31.278542       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 19617 (19712)\nI0807 01:12:29.561611       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:12:29.561660       1 leaderelection.go:66] leaderelection lost\nF0807 01:12:29.563725       1 builder.go:217] server exited\n
Aug 07 01:12:46.716 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): marks=true&resourceVersion=17106&timeout=6m46s&timeoutSeconds=406&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:12:46.279410       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=17106&timeout=5m16s&timeoutSeconds=316&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:12:46.280485       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=20847&timeoutSeconds=384&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:12:46.282546       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=18844&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:12:46.283769       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=17101&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:12:46.284776       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=18861&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 01:12:46.447857       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0807 01:12:46.447891       1 server.go:264] leaderelection lost\n
Aug 07 01:12:56.109 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 255 (Error): I0807 01:12:55.804803       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:12:55.806626       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:12:55.806684       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 01:12:55.807464       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 01:13:14.170 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 255 (Error): I0807 01:13:13.833398       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:13:13.835579       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:13:13.835631       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 01:13:13.836335       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 01:13:52.376 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 13:51.679542       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:13:51.679553       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:13:51.679562       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:13:51.679572       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:13:51.679582       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:13:51.679591       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:13:51.679601       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:13:51.679610       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:13:51.679620       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:13:51.679630       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:13:51.679643       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:13:51.679655       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:13:51.679667       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:13:51.679682       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:13:51.679725       1 server.go:692] external host was not specified, using 10.0.147.237\nI0807 01:13:51.679867       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:13:51.680099       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:14:14.089 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 255 (Error): I0807 01:14:13.152508       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:14:13.161005       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:14:13.161075       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 01:14:13.161766       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 01:14:16.475 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 14:16.291277       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:14:16.291285       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:14:16.291293       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:14:16.291302       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:14:16.291311       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:14:16.291318       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:14:16.291326       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:14:16.291357       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:14:16.291366       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:14:16.291375       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:14:16.291388       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:14:16.291399       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:14:16.291409       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:14:16.291417       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:14:16.291458       1 server.go:692] external host was not specified, using 10.0.147.237\nI0807 01:14:16.291634       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:14:16.291867       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:14:32.172 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-79.us-east-2.compute.internal node/ip-10-0-137-79.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 255 (Error): I0807 01:14:31.159923       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:14:31.162820       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:14:31.162934       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 01:14:31.163858       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 01:14:50.585 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 14:49.794439       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:14:49.794445       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:14:49.794450       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:14:49.794456       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:14:49.794462       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:14:49.794467       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:14:49.794472       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:14:49.794477       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:14:49.794482       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:14:49.794488       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:14:49.794496       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:14:49.794503       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:14:49.794513       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:14:49.794520       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:14:49.794548       1 server.go:692] external host was not specified, using 10.0.147.237\nI0807 01:14:49.794718       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:14:49.794979       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:14:53.556 E ns/openshift-machine-api pod/machine-api-controllers-85bd7bc896-drg8w node/ip-10-0-134-8.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 01:15:09.718 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-controller-manager-6 container exited with code 255 (Error): 978440       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubestorageversionmigrators?allowWatchBookmarks=true&resourceVersion=21208&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:08.979661       1 reflector.go:280] github.com/openshift/client-go/authorization/informers/externalversions/factory.go:101: Failed to watch *v1.RoleBindingRestriction: Get https://localhost:6443/apis/authorization.openshift.io/v1/rolebindingrestrictions?allowWatchBookmarks=true&resourceVersion=21208&timeout=6m8s&timeoutSeconds=368&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:08.980872       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Lease: Get https://localhost:6443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=22187&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:08.981981       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/migration.k8s.io/v1alpha1/storageversionmigrations?allowWatchBookmarks=true&resourceVersion=21260&timeout=7m46s&timeoutSeconds=466&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:08.983103       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/infrastructures?allowWatchBookmarks=true&resourceVersion=20102&timeout=7m50s&timeoutSeconds=470&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 01:15:09.222082       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0807 01:15:09.222182       1 controllermanager.go:291] leaderelection lost\n
Aug 07 01:15:12.701 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (159 of 508)\n* Could not update deployment "openshift-cloud-credential-operator/cloud-credential-operator" (142 of 508)\n* Could not update deployment "openshift-cluster-machine-approver/machine-approver" (223 of 508)\n* Could not update deployment "openshift-cluster-node-tuning-operator/cluster-node-tuning-operator" (229 of 508)\n* Could not update deployment "openshift-cluster-samples-operator/cluster-samples-operator" (256 of 508)\n* Could not update deployment "openshift-cluster-storage-operator/cluster-storage-operator" (274 of 508)\n* Could not update deployment "openshift-console/downloads" (326 of 508)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (240 of 508)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (197 of 508)\n* Could not update deployment "openshift-ingress-operator/ingress-operator" (213 of 508)\n* Could not update deployment "openshift-insights/insights-operator" (350 of 508)\n* Could not update deployment "openshift-machine-api/cluster-autoscaler-operator" (180 of 508)\n* Could not update deployment "openshift-monitoring/cluster-monitoring-operator" (300 of 508)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (364 of 508)\n* Could not update deployment "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (284 of 508)\n* Could not update deployment "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (293 of 508)
Aug 07 01:15:16.858 E ns/openshift-insights pod/insights-operator-8497d45947-gm6d6 node/ip-10-0-147-237.us-east-2.compute.internal container=operator container exited with code 2 (Error):  01:13:06.284374       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0807 01:13:06.287303       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0807 01:13:06.290411       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0807 01:13:06.296835       1 diskrecorder.go:170] Writing 46 records to /var/lib/insights-operator/insights-2020-08-07-011306.tar.gz\nI0807 01:13:06.299455       1 diskrecorder.go:134] Wrote 46 records to disk in 2ms\nI0807 01:13:06.299478       1 periodic.go:151] Periodic gather config completed in 82ms\nI0807 01:13:15.421230       1 httplog.go:90] GET /metrics: (7.52607ms) 200 [Prometheus/2.14.0 10.131.0.19:45004]\nI0807 01:13:15.829811       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0807 01:13:15.837639       1 configobserver.go:93] Found cloud.openshift.com token\nI0807 01:13:15.837669       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0807 01:13:27.902947       1 httplog.go:90] GET /metrics: (7.133051ms) 200 [Prometheus/2.14.0 10.129.2.13:56724]\nI0807 01:13:45.422257       1 httplog.go:90] GET /metrics: (8.596402ms) 200 [Prometheus/2.14.0 10.131.0.19:45004]\nI0807 01:13:57.902865       1 httplog.go:90] GET /metrics: (7.109041ms) 200 [Prometheus/2.14.0 10.129.2.13:56724]\nI0807 01:14:15.423733       1 httplog.go:90] GET /metrics: (10.036298ms) 200 [Prometheus/2.14.0 10.131.0.19:45004]\nI0807 01:14:15.811401       1 status.go:298] The operator is healthy\nI0807 01:14:15.811483       1 status.go:373] No status update necessary, objects are identical\nI0807 01:14:27.903735       1 httplog.go:90] GET /metrics: (7.987189ms) 200 [Prometheus/2.14.0 10.129.2.13:56724]\nI0807 01:14:45.421240       1 httplog.go:90] GET /metrics: (7.48674ms) 200 [Prometheus/2.14.0 10.131.0.19:45004]\nI0807 01:14:57.905130       1 httplog.go:90] GET /metrics: (9.14971ms) 200 [Prometheus/2.14.0 10.129.2.13:56724]\nI0807 01:15:15.425593       1 httplog.go:90] GET /metrics: (11.826316ms) 200 [Prometheus/2.14.0 10.131.0.19:45004]\n
Aug 07 01:15:29.970 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 255 (Error): I0807 01:15:28.808205       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:15:28.810536       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:15:28.810640       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0807 01:15:28.811339       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Aug 07 01:15:48.081 E ns/openshift-controller-manager pod/controller-manager-pwf49 node/ip-10-0-147-237.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 07 01:16:08.669 E ns/openshift-monitoring pod/openshift-state-metrics-7d5c9c955b-tjttg node/ip-10-0-132-131.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Aug 07 01:16:08.744 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-132-131.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/07 01:00:26 Watching directory: "/etc/alertmanager/config"\n
Aug 07 01:16:08.744 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-132-131.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/07 01:00:27 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:00:27 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:00:27 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:00:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/07 01:00:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:00:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:00:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:00:27 http.go:106: HTTPS: listening on [::]:9095\n
Aug 07 01:16:15.725 E ns/openshift-cluster-node-tuning-operator pod/tuned-bzxvv node/ip-10-0-132-131.us-east-2.compute.internal container=tuned container exited with code 143 (Error): true\nI0807 01:01:56.947869    2348 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:01:56.949144    2348 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:01:57.059072    2348 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:02:15.937561    2348 openshift-tuned.go:852] Lowering resyncPeriod to 52\nE0807 01:04:36.068996    2348 openshift-tuned.go:881] Pod event watch channel closed.\nI0807 01:04:36.069041    2348 openshift-tuned.go:883] Increasing resyncPeriod to 104\nI0807 01:06:20.069214    2348 openshift-tuned.go:209] Extracting tuned profiles\nI0807 01:06:20.071479    2348 openshift-tuned.go:739] Resync period to pull node/pod labels: 104 [s]\nI0807 01:06:20.090554    2348 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-54797c68f-5fq2t) labels changed node wide: true\nI0807 01:06:25.086006    2348 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:06:25.087498    2348 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0807 01:06:25.088597    2348 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:06:25.255337    2348 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:06:29.646208    2348 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-882/pod-configmap-7f824df6-b129-44e5-ad8a-0e1c529fef68) labels changed node wide: false\nI0807 01:06:33.881422    2348 openshift-tuned.go:550] Pod (e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-882/pod-configmap-7f824df6-b129-44e5-ad8a-0e1c529fef68) labels changed node wide: false\nI0807 01:08:04.076209    2348 openshift-tuned.go:852] Lowering resyncPeriod to 52\nE0807 01:14:57.671000    2348 openshift-tuned.go:881] Pod event watch channel closed.\nI0807 01:14:57.671021    2348 openshift-tuned.go:883] Increasing resyncPeriod to 104\n
Aug 07 01:16:19.072 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-93.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/07 01:01:38 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 07 01:16:19.072 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-93.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/07 01:01:39 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 01:01:39 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:01:39 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:01:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 01:01:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:01:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 01:01:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:01:39 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 01:01:39 http.go:106: HTTPS: listening on [::]:9091\n2020/08/07 01:02:04 oauthproxy.go:774: basicauth: 10.131.0.12:37146 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 01:06:35 oauthproxy.go:774: basicauth: 10.131.0.12:41738 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 01:11:05 oauthproxy.go:774: basicauth: 10.131.0.12:46844 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 01:13:06 oauthproxy.go:774: basicauth: 10.128.0.24:38176 Authorization header does not start with 'Basic', skipping basic authentication\n2020/08/07 01:15:35 oauthproxy.go:774: basicauth: 10.131.0.12:52324 Authorization header does not start with 'Basic', skipping basic authentication\n2
Aug 07 01:16:19.072 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-93.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-07T01:01:38.760109883Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-07T01:01:38.760249815Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-07T01:01:38.762724828Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-07T01:01:43.944773351Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 07 01:16:19.935 E ns/openshift-monitoring pod/prometheus-adapter-7645bd7d79-pg58l node/ip-10-0-132-61.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0807 00:59:40.863089       1 adapter.go:93] successfully using in-cluster auth\nI0807 00:59:41.700393       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 07 01:16:24.799 E ns/openshift-monitoring pod/node-exporter-pf9d9 node/ip-10-0-137-79.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:54:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:16:25.079 E ns/openshift-monitoring pod/prometheus-adapter-7645bd7d79-qmsxv node/ip-10-0-151-93.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0807 00:59:51.496635       1 adapter.go:93] successfully using in-cluster auth\nI0807 00:59:53.608778       1 secure_serving.go:116] Serving securely on [::]:6443\n
Aug 07 01:16:25.102 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-93.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/07 01:00:41 Watching directory: "/etc/alertmanager/config"\n
Aug 07 01:16:25.102 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-93.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/07 01:00:41 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:00:41 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:00:41 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:00:41 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/07 01:00:41 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:00:41 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:00:41 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:00:41 http.go:106: HTTPS: listening on [::]:9095\n
Aug 07 01:16:28.887 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 1:16:28.173245       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:16:28.173251       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:16:28.173259       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:16:28.173267       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:16:28.173272       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:16:28.173277       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:16:28.173283       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:16:28.173288       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:16:28.173293       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:16:28.173300       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:16:28.173313       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:16:28.173320       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:16:28.173326       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:16:28.173336       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:16:28.173366       1 server.go:692] external host was not specified, using 10.0.134.8\nI0807 01:16:28.173525       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:16:28.173796       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:16:33.132 E ns/openshift-monitoring pod/thanos-querier-9d55979f4-w2b7d node/ip-10-0-151-93.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/08/07 01:01:33 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 01:01:33 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:01:33 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:01:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 01:01:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:01:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/08/07 01:01:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:01:33 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 01:01:33 http.go:106: HTTPS: listening on [::]:9091\n
Aug 07 01:16:34.139 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-93.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T01:16:24.969Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T01:16:24.975Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T01:16:24.975Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T01:16:24.977Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T01:16:24.977Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T01:16:24.977Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T01:16:24.978Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T01:16:24.978Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 01:16:38.157 E ns/openshift-ingress pod/router-default-5444bd6884-6hhlc node/ip-10-0-151-93.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:15:51.632002       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:15:56.628811       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:01.623107       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:06.689460       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:11.656506       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:16.676704       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:21.649880       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:26.633545       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:31.625136       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:16:36.623269       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 07 01:16:41.054 E ns/openshift-monitoring pod/node-exporter-skznr node/ip-10-0-132-61.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:58:42Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:42Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:16:52.930 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 1:16:51.911985       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:16:51.911996       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:16:51.912006       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:16:51.912015       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:16:51.912025       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:16:51.912035       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:16:51.912044       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:16:51.912054       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:16:51.912064       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:16:51.912074       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:16:51.912088       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:16:51.912099       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:16:51.912110       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:16:51.912122       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:16:51.912166       1 server.go:692] external host was not specified, using 10.0.134.8\nI0807 01:16:51.912306       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:16:51.912584       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:16:57.867 E ns/openshift-monitoring pod/node-exporter-79swq node/ip-10-0-132-131.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:58:44Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:58:44Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:17:06.003 E ns/openshift-controller-manager pod/controller-manager-7v675 node/ip-10-0-137-79.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Aug 07 01:17:06.175 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-132-61.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T01:17:02.255Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T01:17:02.260Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T01:17:02.261Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T01:17:02.262Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T01:17:02.262Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T01:17:02.262Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T01:17:02.264Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T01:17:02.264Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 01:17:09.565 E ns/openshift-monitoring pod/node-exporter-mqtb8 node/ip-10-0-147-237.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T00:54:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T00:54:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:17:16.262 E ns/openshift-marketplace pod/redhat-operators-6d85f68f8-hvfv6 node/ip-10-0-151-93.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 07 01:17:23.112 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): 1:17:22.899708       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0807 01:17:22.899717       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0807 01:17:22.899726       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0807 01:17:22.899735       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0807 01:17:22.899744       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0807 01:17:22.899752       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0807 01:17:22.899761       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0807 01:17:22.899770       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0807 01:17:22.899778       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0807 01:17:22.899787       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0807 01:17:22.899799       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0807 01:17:22.899810       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0807 01:17:22.899826       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0807 01:17:22.899838       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0807 01:17:22.899884       1 server.go:692] external host was not specified, using 10.0.134.8\nI0807 01:17:22.900065       1 server.go:735] Initializing cache sizes based on 0MB limit\nI0807 01:17:22.901136       1 server.go:193] Version: v0.0.0-master+$Format:%h$\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Aug 07 01:17:25.096 E ns/openshift-service-ca pod/configmap-cabundle-injector-c594d94c9-4qr9m node/ip-10-0-137-79.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Aug 07 01:17:25.097 E ns/openshift-service-ca pod/apiservice-cabundle-injector-cfc5c4c5-vbpf2 node/ip-10-0-134-8.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Aug 07 01:17:25.122 E ns/openshift-service-ca pod/service-serving-cert-signer-85669ff857-8dlqs node/ip-10-0-137-79.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Aug 07 01:17:47.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 255 (Error):      1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/autoscaling.openshift.io/v1beta1/machineautoscalers?allowWatchBookmarks=true&resourceVersion=16359&timeout=9m31s&timeoutSeconds=571&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.193490       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consoleexternalloglinks?allowWatchBookmarks=true&resourceVersion=16334&timeout=7m21s&timeoutSeconds=441&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.193617       1 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/catalogsources?allowWatchBookmarks=true&resourceVersion=26207&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.194735       1 reflector.go:280] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.BrokerTemplateInstance: Get https://localhost:6443/apis/template.openshift.io/v1/brokertemplateinstances?allowWatchBookmarks=true&resourceVersion=22371&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.195856       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=15714&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 01:17:46.197959       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0807 01:17:46.198044       1 controllermanager.go:291] leaderelection lost\n
Aug 07 01:17:47.250 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=scheduler container exited with code 255 (Error): alhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=15711&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.065158       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=15718&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.066296       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=26477&timeoutSeconds=357&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.068059       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25245&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.069201       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=25990&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:46.070342       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=18844&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0807 01:17:46.267785       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0807 01:17:46.267820       1 server.go:264] leaderelection lost\n
Aug 07 01:17:53.733 E ns/openshift-console pod/console-666b56bbc6-qvjmq node/ip-10-0-147-237.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/08/7 01:00:59 cmd/main: cookies are secure!\n2020/08/7 01:00:59 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 01:01:09 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 01:01:19 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/08/7 01:01:29 cmd/main: Binding to [::]:8443...\n2020/08/7 01:01:29 cmd/main: using TLS\n2020/08/7 01:16:12 http: TLS handshake error from 10.129.2.5:57958: read tcp 10.128.0.41:8443->10.129.2.5:57958: read: connection reset by peer\n
Aug 07 01:20:09.872 E ns/openshift-sdn pod/sdn-controller-v5glr node/ip-10-0-134-8.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:49:21.382762       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 01:20:10.216 E ns/openshift-sdn pod/ovs-rtkgq node/ip-10-0-132-131.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): e last 0 s (4 deletes)\n2020-08-07T01:16:12.790Z|00275|bridge|INFO|bridge br0: deleted interface veth56065656 on port 6\n2020-08-07T01:16:14.792Z|00033|jsonrpc|WARN|unix#940: receive error: Connection reset by peer\n2020-08-07T01:16:14.792Z|00034|reconnect|WARN|unix#940: connection dropped (Connection reset by peer)\n2020-08-07T01:16:14.735Z|00276|bridge|INFO|bridge br0: added interface veth8eec5b49 on port 29\n2020-08-07T01:16:14.774Z|00277|connmgr|INFO|br0<->unix#1066: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:16:14.815Z|00278|connmgr|INFO|br0<->unix#1070: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:17.675Z|00279|bridge|INFO|bridge br0: added interface vethb1b40ba0 on port 30\n2020-08-07T01:16:17.707Z|00280|connmgr|INFO|br0<->unix#1076: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:16:17.747Z|00281|connmgr|INFO|br0<->unix#1079: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:25.342Z|00282|bridge|INFO|bridge br0: added interface vethfdb34d4c on port 31\n2020-08-07T01:16:25.386Z|00283|connmgr|INFO|br0<->unix#1088: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:16:25.451Z|00284|connmgr|INFO|br0<->unix#1092: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:25.453Z|00285|connmgr|INFO|br0<->unix#1094: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-07T01:16:40.608Z|00286|connmgr|INFO|br0<->unix#1106: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:40.650Z|00287|connmgr|INFO|br0<->unix#1109: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:16:40.676Z|00288|bridge|INFO|bridge br0: deleted interface vethe8f82ca8 on port 14\n2020-08-07T01:16:43.801Z|00289|connmgr|INFO|br0<->unix#1112: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:43.833Z|00290|connmgr|INFO|br0<->unix#1115: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:16:43.864Z|00291|bridge|INFO|bridge br0: deleted interface veth3bd1f3d5 on port 13\n2020-08-07 01:20:09 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:20:16.249 E ns/openshift-sdn pod/sdn-ckwz6 node/ip-10-0-132-131.us-east-2.compute.internal container=sdn container exited with code 255 (Error):  service "openshift-kube-scheduler/scheduler:https"\nI0807 01:18:37.857621    2049 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:18:37.857643    2049 proxier.go:350] userspace syncProxyRules took 26.573455ms\nI0807 01:18:50.890125    2049 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.62:8443 10.129.0.76:8443 10.130.0.60:8443]\nI0807 01:18:50.890175    2049 roundrobin.go:218] Delete endpoint 10.128.0.62:8443 for service "openshift-controller-manager/controller-manager:https"\nI0807 01:18:51.012281    2049 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:18:51.012304    2049 proxier.go:350] userspace syncProxyRules took 27.010976ms\nI0807 01:19:21.129031    2049 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:19:21.129056    2049 proxier.go:350] userspace syncProxyRules took 26.63952ms\nI0807 01:19:51.244260    2049 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:19:51.244285    2049 proxier.go:350] userspace syncProxyRules took 26.382179ms\nI0807 01:20:06.924486    2049 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.130.0.4:6443]\nI0807 01:20:06.924521    2049 roundrobin.go:218] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:20:07.048563    2049 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:07.048583    2049 proxier.go:350] userspace syncProxyRules took 26.581908ms\nI0807 01:20:15.688103    2049 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0807 01:20:16.002106    2049 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:20:16.002185    2049 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:20:24.665 E ns/openshift-sdn pod/sdn-controller-ww5zf node/ip-10-0-137-79.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 00:49:21.915721       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 01:20:35.701 E ns/openshift-sdn pod/ovs-sj8k2 node/ip-10-0-137-79.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): 6 on port 23\n2020-08-07T01:17:05.105Z|00363|connmgr|INFO|br0<->unix#1870: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:17:05.136Z|00364|connmgr|INFO|br0<->unix#1873: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:17:05.160Z|00365|bridge|INFO|bridge br0: deleted interface vethf873bcb7 on port 24\n2020-08-07T01:17:15.155Z|00366|bridge|INFO|bridge br0: added interface veth520205a0 on port 60\n2020-08-07T01:17:15.191Z|00367|connmgr|INFO|br0<->unix#1882: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:17:15.255Z|00368|connmgr|INFO|br0<->unix#1886: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-07T01:17:15.257Z|00369|connmgr|INFO|br0<->unix#1888: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:17:16.540Z|00370|bridge|INFO|bridge br0: added interface veth2a1363e9 on port 61\n2020-08-07T01:17:16.573Z|00371|connmgr|INFO|br0<->unix#1891: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:17:16.616Z|00372|connmgr|INFO|br0<->unix#1894: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:17:24.434Z|00373|connmgr|INFO|br0<->unix#1904: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:17:24.472Z|00374|connmgr|INFO|br0<->unix#1907: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:17:24.507Z|00375|bridge|INFO|bridge br0: deleted interface veth55aec58d on port 3\n2020-08-07T01:17:24.710Z|00376|connmgr|INFO|br0<->unix#1910: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:17:24.740Z|00377|connmgr|INFO|br0<->unix#1913: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:17:24.762Z|00378|bridge|INFO|bridge br0: deleted interface vethef86d0bc on port 4\n2020-08-07T01:18:02.176Z|00379|connmgr|INFO|br0<->unix#1943: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:18:02.218Z|00380|connmgr|INFO|br0<->unix#1946: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:18:02.260Z|00381|bridge|INFO|bridge br0: deleted interface veth605b2e47 on port 38\n2020-08-07 01:20:34 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:20:37.830 E ns/openshift-multus pod/multus-lvqtb node/ip-10-0-151-93.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 01:20:39.240 E ns/openshift-sdn pod/sdn-controller-nfwrx node/ip-10-0-147-237.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): twork/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 13429 (16955)\nW0807 01:02:24.013617       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 13428 (16955)\nI0807 01:06:18.310740       1 vnids.go:115] Allocated netid 16613966 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4081"\nI0807 01:06:18.324471       1 vnids.go:115] Allocated netid 6342376 for namespace "e2e-k8s-service-lb-available-4383"\nI0807 01:06:18.333160       1 vnids.go:115] Allocated netid 14875750 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-65"\nI0807 01:06:18.359396       1 vnids.go:115] Allocated netid 9706664 for namespace "e2e-k8s-sig-apps-job-upgrade-8149"\nI0807 01:06:18.379194       1 vnids.go:115] Allocated netid 16080522 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6826"\nI0807 01:06:18.412380       1 vnids.go:115] Allocated netid 15703199 for namespace "e2e-control-plane-available-8794"\nI0807 01:06:18.423770       1 vnids.go:115] Allocated netid 3401812 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-882"\nI0807 01:06:18.449448       1 vnids.go:115] Allocated netid 1580954 for namespace "e2e-frontend-ingress-available-8510"\nI0807 01:06:18.468075       1 vnids.go:115] Allocated netid 6698837 for namespace "e2e-k8s-sig-apps-deployment-upgrade-1013"\nW0807 01:17:35.305834       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 16955 (26079)\nW0807 01:17:35.309517       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 18329 (26078)\nW0807 01:17:35.309656       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 18617 (23699)\n
Aug 07 01:20:39.760 E ns/openshift-sdn pod/sdn-s5jnx node/ip-10-0-137-79.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ice "openshift-kube-scheduler/scheduler:https"\nI0807 01:18:37.889807    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:18:37.889915    2991 proxier.go:350] userspace syncProxyRules took 32.500737ms\nI0807 01:18:50.897024    2991 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-controller-manager/controller-manager:https to [10.128.0.62:8443 10.129.0.76:8443 10.130.0.60:8443]\nI0807 01:18:50.897062    2991 roundrobin.go:218] Delete endpoint 10.128.0.62:8443 for service "openshift-controller-manager/controller-manager:https"\nI0807 01:18:51.037546    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:18:51.037569    2991 proxier.go:350] userspace syncProxyRules took 31.640591ms\nI0807 01:19:21.162732    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:19:21.162755    2991 proxier.go:350] userspace syncProxyRules took 28.595542ms\nI0807 01:19:51.285760    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:19:51.285783    2991 proxier.go:350] userspace syncProxyRules took 29.310023ms\nI0807 01:20:06.929203    2991 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.130.0.4:6443]\nI0807 01:20:06.929371    2991 roundrobin.go:218] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:20:07.069327    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:07.069361    2991 proxier.go:350] userspace syncProxyRules took 29.592613ms\nI0807 01:20:37.222302    2991 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:37.222330    2991 proxier.go:350] userspace syncProxyRules took 38.687987ms\nI0807 01:20:39.085644    2991 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:20:39.085678    2991 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:20:46.795 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:20:57.050 E ns/openshift-sdn pod/ovs-kr9sk node/ip-10-0-134-8.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error):  in the last 0 s (1 deletes)\n2020-08-07T01:20:37.371Z|00448|connmgr|INFO|br0<->unix#2205: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:20:37.396Z|00449|connmgr|INFO|br0<->unix#2208: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:20:37.426Z|00450|connmgr|INFO|br0<->unix#2211: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:20:37.493Z|00451|connmgr|INFO|br0<->unix#2214: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:20:37.522Z|00452|connmgr|INFO|br0<->unix#2217: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:20:37.546Z|00453|connmgr|INFO|br0<->unix#2220: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:20:37.583Z|00454|connmgr|INFO|br0<->unix#2223: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:20:37.618Z|00455|connmgr|INFO|br0<->unix#2226: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:20:37.643Z|00456|connmgr|INFO|br0<->unix#2229: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:20:37.670Z|00457|connmgr|INFO|br0<->unix#2232: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:20:37.698Z|00458|connmgr|INFO|br0<->unix#2235: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:20:37.735Z|00459|connmgr|INFO|br0<->unix#2238: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:20:37.762Z|00460|connmgr|INFO|br0<->unix#2241: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:20:38.229Z|00461|connmgr|INFO|br0<->unix#2245: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:20:38.258Z|00462|connmgr|INFO|br0<->unix#2248: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:20:38.284Z|00463|bridge|INFO|bridge br0: deleted interface veth5ab77296 on port 5\n2020-08-07T01:20:40.601Z|00464|bridge|INFO|bridge br0: added interface veth1a774b2d on port 79\n2020-08-07T01:20:40.638Z|00465|connmgr|INFO|br0<->unix#2253: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:20:40.679Z|00466|connmgr|INFO|br0<->unix#2256: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07 01:20:56 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:21:00.077 E ns/openshift-sdn pod/sdn-npf2n node/ip-10-0-134-8.us-east-2.compute.internal container=sdn container exited with code 255 (Error): =5353 dport=35167 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\nudp      17 25 src=10.129.0.76 dst=172.30.0.10 sport=39662 dport=53 src=10.131.0.7 dst=10.129.0.76 sport=5353 dport=39662 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n", error message: exit status 1\nI0807 01:20:37.753621   83081 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0807 01:20:37.753656   83081 cmd.go:173] openshift-sdn network plugin registering startup\nI0807 01:20:37.753796   83081 cmd.go:177] openshift-sdn network plugin ready\nI0807 01:20:38.294331   83081 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-p22hp\nI0807 01:20:40.657039   83081 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-vw6t5 got IP 10.129.0.78, ofport 79\nI0807 01:20:43.991716   83081 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.129.0.78:6443 10.130.0.4:6443]\nI0807 01:20:43.991766   83081 roundrobin.go:218] Delete endpoint 10.129.0.78:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:20:44.017918   83081 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.129.0.78:6443]\nI0807 01:20:44.017961   83081 roundrobin.go:218] Delete endpoint 10.130.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:20:44.138587   83081 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:44.138615   83081 proxier.go:350] userspace syncProxyRules took 29.630194ms\nI0807 01:20:44.264141   83081 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:44.264170   83081 proxier.go:350] userspace syncProxyRules took 31.231335ms\nI0807 01:20:59.044332   83081 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:20:59.044486   83081 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:21:14.692 E ns/openshift-multus pod/multus-admission-controller-bcqfg node/ip-10-0-137-79.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 07 01:21:16.702 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6646cb477d-6b5zn node/ip-10-0-137-79.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): 22225)\nW0807 01:17:35.575066       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 20102 (22225)\nW0807 01:17:35.575107       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 18617 (21164)\nW0807 01:17:35.575316       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Project ended with: too old resource version: 21225 (23678)\nW0807 01:17:35.575460       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 21208 (23672)\nW0807 01:17:35.575504       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 22429 (22970)\nW0807 01:17:35.575533       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 18511 (21164)\nW0807 01:17:35.575613       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 22228 (25890)\nW0807 01:17:35.621761       1 reflector.go:299] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 21850 (23727)\nE0807 01:21:16.514767       1 leaderelection.go:330] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver-operator/configmaps/openshift-apiserver-operator-lock?timeout=35s: context deadline exceeded\nI0807 01:21:16.514771       1 leaderelection.go:287] failed to renew lease openshift-apiserver-operator/openshift-apiserver-operator-lock: failed to tryAcquireOrRenew context deadline exceeded\nF0807 01:21:16.514837       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:21:20.829 E ns/openshift-multus pod/multus-fcpwr node/ip-10-0-132-131.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 01:21:28.495 E ns/openshift-sdn pod/sdn-bx4c8 node/ip-10-0-147-237.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 0304 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.129.0.78:6443]\nI0807 01:20:44.017442   80304 roundrobin.go:218] Delete endpoint 10.130.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:20:44.142146   80304 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:44.142176   80304 proxier.go:350] userspace syncProxyRules took 32.443641ms\nI0807 01:20:44.266152   80304 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:20:44.266173   80304 proxier.go:350] userspace syncProxyRules took 28.483103ms\nI0807 01:21:14.388781   80304 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:14.388806   80304 proxier.go:350] userspace syncProxyRules took 27.306607ms\nI0807 01:21:16.698496   80304 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-apiserver-operator/metrics:https\nI0807 01:21:16.820654   80304 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:16.820678   80304 proxier.go:350] userspace syncProxyRules took 28.700701ms\nI0807 01:21:17.701378   80304 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-apiserver-operator/metrics:https to [10.130.0.49:8443]\nI0807 01:21:17.701413   80304 roundrobin.go:218] Delete endpoint 10.130.0.49:8443 for service "openshift-apiserver-operator/metrics:https"\nI0807 01:21:17.850298   80304 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:17.850324   80304 proxier.go:350] userspace syncProxyRules took 45.019429ms\nI0807 01:21:19.872526   80304 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0807 01:21:28.022409   80304 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:21:28.022438   80304 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:21:44.847 E ns/openshift-sdn pod/ovs-s7qmp node/ip-10-0-132-61.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error):  flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:43.522Z|00165|bridge|INFO|bridge br0: added interface veth9001b5bf on port 28\n2020-08-07T01:16:43.556Z|00166|connmgr|INFO|br0<->unix#1040: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:16:43.602Z|00167|connmgr|INFO|br0<->unix#1043: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:16:55.126Z|00168|bridge|INFO|bridge br0: added interface veth5f23341f on port 29\n2020-08-07T01:16:55.181Z|00169|connmgr|INFO|br0<->unix#1052: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:16:55.247Z|00170|connmgr|INFO|br0<->unix#1055: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:21:05.979Z|00171|connmgr|INFO|br0<->unix#1239: 2 flow_mods in the last 0 s (2 adds)\n2020-08-07T01:21:06.084Z|00172|connmgr|INFO|br0<->unix#1244: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:06.157Z|00173|connmgr|INFO|br0<->unix#1252: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:21:06.305Z|00174|connmgr|INFO|br0<->unix#1255: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:06.330Z|00175|connmgr|INFO|br0<->unix#1258: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:06.358Z|00176|connmgr|INFO|br0<->unix#1261: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:06.380Z|00177|connmgr|INFO|br0<->unix#1264: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:06.421Z|00178|connmgr|INFO|br0<->unix#1267: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:06.447Z|00179|connmgr|INFO|br0<->unix#1270: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:06.476Z|00180|connmgr|INFO|br0<->unix#1273: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:06.501Z|00181|connmgr|INFO|br0<->unix#1276: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:06.535Z|00182|connmgr|INFO|br0<->unix#1279: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:06.560Z|00183|connmgr|INFO|br0<->unix#1282: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07 01:21:44 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:21:54.887 E ns/openshift-sdn pod/sdn-9zhgw node/ip-10-0-132-61.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 07 01:21:16.697472   52954 roundrobin.go:298] LoadBalancerRR: Removing endpoints for openshift-apiserver-operator/metrics:https\nI0807 01:21:16.817100   52954 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:16.817127   52954 proxier.go:350] userspace syncProxyRules took 28.045216ms\nI0807 01:21:17.701374   52954 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-apiserver-operator/metrics:https to [10.130.0.49:8443]\nI0807 01:21:17.701408   52954 roundrobin.go:218] Delete endpoint 10.130.0.49:8443 for service "openshift-apiserver-operator/metrics:https"\nI0807 01:21:17.822651   52954 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:17.822676   52954 proxier.go:350] userspace syncProxyRules took 29.061801ms\nI0807 01:21:29.741960   52954 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.10:6443 10.129.0.78:6443 10.130.0.61:6443]\nI0807 01:21:29.742007   52954 roundrobin.go:218] Delete endpoint 10.130.0.61:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:21:29.759467   52954 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.78:6443 10.130.0.61:6443]\nI0807 01:21:29.759498   52954 roundrobin.go:218] Delete endpoint 10.128.0.10:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:21:29.865191   52954 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:29.865214   52954 proxier.go:350] userspace syncProxyRules took 28.760214ms\nI0807 01:21:29.984454   52954 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:29.984484   52954 proxier.go:350] userspace syncProxyRules took 28.148726ms\nI0807 01:21:53.805107   52954 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:21:53.805162   52954 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:22:00.565 E ns/openshift-multus pod/multus-admission-controller-258cb node/ip-10-0-147-237.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Aug 07 01:22:12.084 E ns/openshift-sdn pod/ovs-ttsth node/ip-10-0-151-93.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): s in the last 0 s (2 deletes)\n2020-08-07T01:17:21.185Z|00221|connmgr|INFO|br0<->unix#1173: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:17:21.211Z|00222|bridge|INFO|bridge br0: deleted interface vethbe8cc094 on port 9\n2020-08-07T01:17:21.196Z|00028|jsonrpc|WARN|unix#1018: receive error: Connection reset by peer\n2020-08-07T01:17:21.196Z|00029|reconnect|WARN|unix#1018: connection dropped (Connection reset by peer)\n2020-08-07T01:21:18.261Z|00223|connmgr|INFO|br0<->unix#1346: 2 flow_mods in the last 0 s (2 adds)\n2020-08-07T01:21:18.312Z|00224|connmgr|INFO|br0<->unix#1350: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:18.359Z|00225|connmgr|INFO|br0<->unix#1358: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:21:18.386Z|00226|connmgr|INFO|br0<->unix#1361: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:21:18.411Z|00227|connmgr|INFO|br0<->unix#1364: 1 flow_mods in the last 0 s (1 deletes)\n2020-08-07T01:21:18.518Z|00228|connmgr|INFO|br0<->unix#1367: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:18.547Z|00229|connmgr|INFO|br0<->unix#1370: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:18.571Z|00230|connmgr|INFO|br0<->unix#1373: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:18.605Z|00231|connmgr|INFO|br0<->unix#1376: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:18.638Z|00232|connmgr|INFO|br0<->unix#1379: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:18.667Z|00233|connmgr|INFO|br0<->unix#1382: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:18.702Z|00234|connmgr|INFO|br0<->unix#1385: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:18.728Z|00235|connmgr|INFO|br0<->unix#1388: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07T01:21:18.757Z|00236|connmgr|INFO|br0<->unix#1391: 3 flow_mods in the last 0 s (3 adds)\n2020-08-07T01:21:18.780Z|00237|connmgr|INFO|br0<->unix#1394: 1 flow_mods in the last 0 s (1 adds)\n2020-08-07 01:22:11 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:22:18.068 E ns/openshift-sdn pod/sdn-5khpz node/ip-10-0-151-93.us-east-2.compute.internal container=sdn container exited with code 255 (Error): e proxy: processing 0 service events\nI0807 01:21:29.867279   91749 proxier.go:350] userspace syncProxyRules took 30.516197ms\nI0807 01:21:29.990488   91749 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:29.990512   91749 proxier.go:350] userspace syncProxyRules took 30.173536ms\nI0807 01:21:53.887213   91749 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-4383/service-test: to [10.131.0.22:80]\nI0807 01:21:53.887249   91749 roundrobin.go:218] Delete endpoint 10.128.2.14:80 for service "e2e-k8s-service-lb-available-4383/service-test:"\nI0807 01:21:54.020880   91749 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:54.020907   91749 proxier.go:350] userspace syncProxyRules took 29.143731ms\nI0807 01:21:55.890744   91749 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-4383/service-test: to [10.128.2.14:80 10.131.0.22:80]\nI0807 01:21:55.890786   91749 roundrobin.go:218] Delete endpoint 10.128.2.14:80 for service "e2e-k8s-service-lb-available-4383/service-test:"\nI0807 01:21:56.016454   91749 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:21:56.016485   91749 proxier.go:350] userspace syncProxyRules took 29.66307ms\nI0807 01:22:11.629480   91749 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.63:6443 10.129.0.78:6443 10.130.0.61:6443]\nI0807 01:22:11.629517   91749 roundrobin.go:218] Delete endpoint 10.128.0.63:6443 for service "openshift-multus/multus-admission-controller:"\nI0807 01:22:11.765883   91749 proxier.go:371] userspace proxy: processing 0 service events\nI0807 01:22:11.765908   91749 proxier.go:350] userspace syncProxyRules took 34.794655ms\nI0807 01:22:16.994618   91749 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0807 01:22:16.994685   91749 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Aug 07 01:22:21.930 E ns/openshift-multus pod/multus-xtzn8 node/ip-10-0-137-79.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 01:23:07.533 E ns/openshift-multus pod/multus-6bht6 node/ip-10-0-134-8.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Aug 07 01:25:14.221 E ns/openshift-machine-config-operator pod/machine-config-operator-569f69467d-cn2qw node/ip-10-0-147-237.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): : 18511 (23702)\nW0807 01:17:35.278830       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 19709 (23687)\nW0807 01:17:35.292278       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 16360 (26001)\nW0807 01:17:35.319475       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 20281 (24540)\nW0807 01:17:35.321713       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20009 (25890)\nW0807 01:17:35.425602       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 18804 (23725)\nW0807 01:17:35.425790       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 16379 (23936)\nW0807 01:17:35.425957       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 16358 (23978)\nW0807 01:17:35.715049       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 16957 (26495)\nW0807 01:17:35.765527       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 16365 (26495)\nW0807 01:17:35.786363       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 16957 (26495)\n
Aug 07 01:27:19.571 E ns/openshift-machine-config-operator pod/machine-config-daemon-kfj9l node/ip-10-0-147-237.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:27:31.472 E ns/openshift-machine-config-operator pod/machine-config-daemon-t5h2v node/ip-10-0-134-8.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:27:46.506 E ns/openshift-machine-config-operator pod/machine-config-daemon-rzs4l node/ip-10-0-132-131.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:27:56.977 E ns/openshift-machine-config-operator pod/machine-config-daemon-slhxc node/ip-10-0-137-79.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:28:02.913 E ns/openshift-machine-config-operator pod/machine-config-daemon-5d4fv node/ip-10-0-132-61.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:30:08.394 E ns/openshift-machine-config-operator pod/machine-config-server-n962t node/ip-10-0-137-79.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:50:26.264455       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:50:26.265289       1 api.go:56] Launching server on :22624\nI0807 00:50:26.265355       1 api.go:56] Launching server on :22623\nI0807 00:55:32.496622       1 api.go:102] Pool worker requested by 10.0.144.145:4731\n
Aug 07 01:30:18.041 E ns/openshift-machine-config-operator pod/machine-config-server-zqtmf node/ip-10-0-134-8.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:50:27.110231       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:50:27.110998       1 api.go:56] Launching server on :22624\nI0807 00:50:27.111051       1 api.go:56] Launching server on :22623\n
Aug 07 01:30:19.492 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-93.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/08/07 01:16:30 Watching directory: "/etc/alertmanager/config"\n
Aug 07 01:30:19.492 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-93.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/08/07 01:16:31 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:16:31 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:16:31 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:16:32 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/08/07 01:16:32 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:16:32 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/08/07 01:16:32 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:16:32 http.go:106: HTTPS: listening on [::]:9095\n
Aug 07 01:30:19.523 E ns/openshift-ingress pod/router-default-776c666bb7-5fccz node/ip-10-0-151-93.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:18.370819       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:23.359398       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:30.651289       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:35.650236       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:42.808999       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:47.808674       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:27:56.559587       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:28:01.558271       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:28:06.555211       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0807 01:28:15.989841       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Aug 07 01:30:21.373 E ns/openshift-console-operator pod/console-operator-7bd785d476-xg5vq node/ip-10-0-134-8.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): 0.0.1-2020-08-07-000123\nE0807 01:17:53.458418       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-08-07-000123\nE0807 01:17:53.458448       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-08-07-000123\nI0807 01:18:01.920492       1 status_controller.go:175] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-08-07T00:55:03Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-08-07T01:18:01Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-08-07T01:18:01Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-08-07T00:55:03Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0807 01:18:01.939274       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"ff605b2e-35f6-4796-b146-35c7553687ef", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0807 01:22:08.875807       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 28575 (28580)\nW0807 01:29:36.982431       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 30669 (31266)\nW0807 01:29:39.727232       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 31266 (31278)\nI0807 01:30:20.250445       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:30:20.250595       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:30:24.716 E ns/openshift-service-ca pod/apiservice-cabundle-injector-779fd79d9b-qss8l node/ip-10-0-134-8.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Aug 07 01:30:25.834 E ns/openshift-service-ca pod/service-serving-cert-signer-55dc6f778d-xclsf node/ip-10-0-134-8.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Aug 07 01:30:26.544 E ns/openshift-machine-config-operator pod/machine-config-server-kbkd5 node/ip-10-0-147-237.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 00:50:26.336499       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 00:50:26.337492       1 api.go:56] Launching server on :22624\nI0807 00:50:26.337536       1 api.go:56] Launching server on :22623\nI0807 00:55:26.864878       1 api.go:102] Pool worker requested by 10.0.144.145:5046\nI0807 00:55:30.326799       1 api.go:102] Pool worker requested by 10.0.144.145:31532\n
Aug 07 01:30:40.972 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-131.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-08-07T01:30:34.928Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-08-07T01:30:34.931Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-08-07T01:30:34.932Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-08-07T01:30:34.933Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-08-07T01:30:34.933Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-08-07T01:30:34.933Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-08-07T01:30:34.934Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-08-07T01:30:34.934Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-08-07
Aug 07 01:31:12.388 E ns/openshift-marketplace pod/redhat-operators-c965f6876-gvtjs node/ip-10-0-132-61.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Aug 07 01:31:28.571 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Alertmanager host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Aug 07 01:31:54.514 E ns/openshift-marketplace pod/community-operators-5c78cf4766-98wp8 node/ip-10-0-132-61.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 07 01:32:53.750 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): vision has been compacted\nE0807 01:30:37.685152       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.685289       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693228       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693447       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693570       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693722       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693759       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.693805       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0807 01:30:37.695015       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0807 01:30:37.884349       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0807 01:30:37.884694       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nW0807 01:30:38.168876       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.137.79 10.0.147.237]\nI0807 01:30:38.193157       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Aug 07 01:32:53.750 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0807 01:16:28.542827       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 07 01:32:53.750 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:28:13.908760       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:28:13.909350       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:28:14.115439       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:28:14.115772       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 07 01:32:53.774 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): I0807 01:13:43.822526       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0807 01:13:43.824091       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0807 01:13:43.824146       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0807 01:17:46.640174       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0807 01:18:00.412269       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Aug 07 01:32:53.774 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:29:20.513592       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:29:20.514039       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:29:30.522113       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:29:30.522472       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:29:40.531765       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:29:40.532108       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:29:50.540258       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:29:50.540631       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:30:00.549407       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:30:00.550466       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:30:10.558619       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:30:10.558977       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:30:20.580564       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:30:20.581013       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:30:30.606920       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:30:30.607260       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 07 01:32:53.774 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): :52.155478       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0807 01:17:52.155543       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0807 01:17:54.211925       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 01:18:00.644261       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 01:18:03.590011       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0807 01:18:03.590046       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0807 01:18:03.762397       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 01:18:12.887769       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Aug 07 01:32:53.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-8.us-east-2.compute.internal node/ip-10-0-134-8.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): nshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1596761390" (2020-08-07 00:50:00 +0000 UTC to 2022-08-07 00:50:01 +0000 UTC (now=2020-08-07 01:18:13.93808991 +0000 UTC))\nI0807 01:18:13.938985       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1596763068" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1596763068" (2020-08-07 00:17:47 +0000 UTC to 2021-08-07 00:17:47 +0000 UTC (now=2020-08-07 01:18:13.938963154 +0000 UTC))\nI0807 01:18:13.939200       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1596763068" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1596763068" (2020-08-07 00:17:47 +0000 UTC to 2021-08-07 00:17:47 +0000 UTC (now=2020-08-07 01:18:13.939170064 +0000 UTC))\nI0807 01:18:13.954597       1 node_tree.go:93] Added node "ip-10-0-132-61.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0807 01:18:13.966851       1 node_tree.go:93] Added node "ip-10-0-134-8.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0807 01:18:13.968453       1 node_tree.go:93] Added node "ip-10-0-137-79.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0807 01:18:13.972016       1 node_tree.go:93] Added node "ip-10-0-147-237.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0807 01:18:13.972246       1 node_tree.go:93] Added node "ip-10-0-151-93.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0807 01:18:13.972360       1 node_tree.go:93] Added node "ip-10-0-132-131.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0807 01:18:14.025823       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Aug 07 01:32:53.890 E ns/openshift-monitoring pod/node-exporter-2l6bq node/ip-10-0-134-8.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T01:17:35Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:35Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:32:53.911 E ns/openshift-cluster-node-tuning-operator pod/tuned-nhss6 node/ip-10-0-151-93.us-east-2.compute.internal container=tuned container exited with code 143 (Error): o:550] Pod (openshift-machine-config-operator/machine-config-daemon-fsc4b) labels changed node wide: true\nI0807 01:27:19.661707   74509 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:27:19.663536   74509 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:27:19.783103   74509 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:30:20.376352   74509 openshift-tuned.go:550] Pod (openshift-console/downloads-65c4b685d7-cnkl9) labels changed node wide: true\nI0807 01:30:24.661684   74509 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:24.663152   74509 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:24.779668   74509 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:30:26.016540   74509 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0807 01:30:29.661726   74509 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:29.663166   74509 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:29.781535   74509 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:30:56.165265   74509 openshift-tuned.go:550] Pod (openshift-ingress/router-default-776c666bb7-5fccz) labels changed node wide: true\nI0807 01:30:59.661735   74509 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:59.663302   74509 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:59.780439   74509 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0807 01:31:06.002052   74509 openshift-tuned.go:550] Pod (e2e-k8s-service-lb-available-4383/service-test-25jx5) labels changed node wide: true\n
Aug 07 01:32:53.918 E ns/openshift-sdn pod/sdn-controller-nxvgj node/ip-10-0-134-8.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0807 01:20:23.876147       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Aug 07 01:32:53.931 E ns/openshift-monitoring pod/node-exporter-92zwh node/ip-10-0-151-93.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T01:16:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T01:16:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:32:53.931 E ns/openshift-cluster-node-tuning-operator pod/tuned-kvxsv node/ip-10-0-134-8.us-east-2.compute.internal container=tuned container exited with code 143 (Error): tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-4-ip-10-0-134-8.us-east-2.compute.internal) labels changed node wide: false\nI0807 01:30:25.905454   72174 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-7-ip-10-0-134-8.us-east-2.compute.internal) labels changed node wide: false\nI0807 01:30:26.117368   72174 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-3-ip-10-0-134-8.us-east-2.compute.internal) labels changed node wide: true\nI0807 01:30:27.065082   72174 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:27.066796   72174 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:27.198268   72174 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:30:31.588600   72174 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-5b9b4c8565-lvzv8) labels changed node wide: true\nI0807 01:30:32.065607   72174 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:32.068283   72174 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:32.253218   72174 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:30:32.254194   72174 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/olm-operator-76ff8bb4f5-tnsg6) labels changed node wide: true\nI0807 01:30:37.065176   72174 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:30:37.066563   72174 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:30:37.192434   72174 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:30:37.646465   72174 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-134-8.us-east-2.compute.internal) labels changed node wide: true\n
Aug 07 01:32:53.947 E ns/openshift-multus pod/multus-kvbjc node/ip-10-0-151-93.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:32:53.972 E ns/openshift-controller-manager pod/controller-manager-kctxh node/ip-10-0-134-8.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 01:32:53.984 E ns/openshift-sdn pod/ovs-vcths node/ip-10-0-151-93.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): onnection reset by peer\n2020-08-07T01:30:18.838Z|00024|reconnect|WARN|unix#430: connection dropped (Connection reset by peer)\n2020-08-07T01:30:18.175Z|00136|connmgr|INFO|br0<->unix#458: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:18.238Z|00137|connmgr|INFO|br0<->unix#461: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:18.274Z|00138|bridge|INFO|bridge br0: deleted interface vethde5b8ea3 on port 7\n2020-08-07T01:30:18.341Z|00139|connmgr|INFO|br0<->unix#464: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:18.421Z|00140|connmgr|INFO|br0<->unix#467: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:18.461Z|00141|bridge|INFO|bridge br0: deleted interface veth9cbe7226 on port 11\n2020-08-07T01:30:18.509Z|00142|connmgr|INFO|br0<->unix#470: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:18.549Z|00143|connmgr|INFO|br0<->unix#473: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:18.593Z|00144|bridge|INFO|bridge br0: deleted interface vethe7b0bf80 on port 3\n2020-08-07T01:30:18.649Z|00145|connmgr|INFO|br0<->unix#476: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:18.702Z|00146|connmgr|INFO|br0<->unix#479: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:18.768Z|00147|bridge|INFO|bridge br0: deleted interface veth5673bce4 on port 5\n2020-08-07T01:30:18.830Z|00148|connmgr|INFO|br0<->unix#483: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:18.891Z|00149|connmgr|INFO|br0<->unix#486: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:18.919Z|00150|bridge|INFO|bridge br0: deleted interface veth3cac92f5 on port 12\n2020-08-07T01:31:03.035Z|00151|connmgr|INFO|br0<->unix#522: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:31:03.063Z|00152|connmgr|INFO|br0<->unix#525: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:31:03.085Z|00153|bridge|INFO|bridge br0: deleted interface veth2cf90a99 on port 4\n2020-08-07 01:31:06 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:32:54.030 E ns/openshift-machine-config-operator pod/machine-config-daemon-rc54r node/ip-10-0-151-93.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:32:54.058 E ns/openshift-multus pod/multus-admission-controller-vw6t5 node/ip-10-0-134-8.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Aug 07 01:32:54.082 E ns/openshift-sdn pod/ovs-9dqx6 node/ip-10-0-134-8.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): in the last 0 s (4 deletes)\n2020-08-07T01:30:23.372Z|00267|bridge|INFO|bridge br0: deleted interface vethadcaf561 on port 31\n2020-08-07T01:30:23.732Z|00268|connmgr|INFO|br0<->unix#712: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:23.810Z|00269|connmgr|INFO|br0<->unix#715: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:23.845Z|00270|bridge|INFO|bridge br0: deleted interface vethe3b3f4d2 on port 3\n2020-08-07T01:30:24.014Z|00271|connmgr|INFO|br0<->unix#718: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:24.067Z|00272|connmgr|INFO|br0<->unix#721: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:24.096Z|00273|bridge|INFO|bridge br0: deleted interface vethcdbf62aa on port 7\n2020-08-07T01:30:24.262Z|00274|connmgr|INFO|br0<->unix#724: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:24.297Z|00275|connmgr|INFO|br0<->unix#727: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:24.330Z|00276|bridge|INFO|bridge br0: deleted interface veth1183dcfc on port 12\n2020-08-07T01:30:24.628Z|00277|connmgr|INFO|br0<->unix#730: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:24.665Z|00278|connmgr|INFO|br0<->unix#733: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:24.708Z|00279|bridge|INFO|bridge br0: deleted interface vethee9b9575 on port 6\n2020-08-07T01:30:25.173Z|00280|connmgr|INFO|br0<->unix#738: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:25.203Z|00281|connmgr|INFO|br0<->unix#741: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:25.225Z|00282|bridge|INFO|bridge br0: deleted interface vethf31edf72 on port 24\n2020-08-07T01:30:25.551Z|00283|connmgr|INFO|br0<->unix#744: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:30:25.588Z|00284|connmgr|INFO|br0<->unix#747: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:30:25.616Z|00285|bridge|INFO|bridge br0: deleted interface vethc6300415 on port 25\n2020-08-07 01:30:37 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:32:54.098 E ns/openshift-multus pod/multus-v65s9 node/ip-10-0-134-8.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:32:54.210 E ns/openshift-machine-config-operator pod/machine-config-server-t4f75 node/ip-10-0-134-8.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 01:30:23.402115       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 01:30:23.409682       1 api.go:56] Launching server on :22623\nI0807 01:30:23.410475       1 api.go:56] Launching server on :22624\n
Aug 07 01:32:54.239 E ns/openshift-machine-config-operator pod/machine-config-daemon-b8svq node/ip-10-0-134-8.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:32:56.527 E ns/openshift-monitoring pod/node-exporter-92zwh node/ip-10-0-151-93.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:32:56.562 E ns/openshift-multus pod/multus-kvbjc node/ip-10-0-151-93.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:32:58.542 E ns/openshift-multus pod/multus-kvbjc node/ip-10-0-151-93.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:33:00.046 E ns/openshift-multus pod/multus-v65s9 node/ip-10-0-134-8.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:33:01.574 E ns/openshift-machine-config-operator pod/machine-config-daemon-rc54r node/ip-10-0-151-93.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 01:33:02.332 E ns/openshift-multus pod/multus-v65s9 node/ip-10-0-134-8.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:33:03.769 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Aug 07 01:33:04.644 E ns/openshift-multus pod/multus-v65s9 node/ip-10-0-134-8.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:33:05.738 E ns/openshift-machine-config-operator pod/machine-config-daemon-b8svq node/ip-10-0-134-8.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 01:33:11.353 E ns/openshift-marketplace pod/community-operators-84cf5d5c7b-w7v5v node/ip-10-0-132-131.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Aug 07 01:33:11.420 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-131.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/08/07 01:30:36 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Aug 07 01:33:11.420 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-131.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/08/07 01:30:38 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 01:30:38 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/08/07 01:30:38 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/08/07 01:30:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/08/07 01:30:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/08/07 01:30:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/08/07 01:30:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/08/07 01:30:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/08/07 01:30:43 http.go:106: HTTPS: listening on [::]:9091\n
Aug 07 01:33:11.420 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-132-131.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-08-07T01:30:36.612565529Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-08-07T01:30:36.612684744Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-08-07T01:30:36.614377697Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-08-07T01:30:41.741905967Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Aug 07 01:33:17.778 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6646cb477d-6b5zn node/ip-10-0-137-79.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): rning' reason: 'OpenShiftAPICheckFailed' "quota.openshift.io.v1" failed with HTTP status code 503 (the server is currently unable to handle the request)\nI0807 01:31:14.038426       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"42e07602-a368-433b-be1e-7bf64ee1effb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available message changed from "Available: \"authorization.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"build.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"oauth.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"quota.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"template.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)" to "Available: \"build.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"oauth.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)\nAvailable: \"quota.openshift.io.v1\" is not ready: 503 (the server is currently unable to handle the request)"\nI0807 01:31:19.676191       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"42e07602-a368-433b-be1e-7bf64ee1effb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("")\nI0807 01:33:16.928100       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:33:16.928294       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:33:18.039 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Aug 07 01:33:20.103 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-876c5c9fd-sm2lz node/ip-10-0-137-79.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error):  [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:29:40.532108       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0807 01:29:50.540258       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:29:50.540631       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0807 01:30:00.549407       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:30:00.550466       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0807 01:30:10.558619       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:30:10.558977       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0807 01:30:20.580564       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:30:20.581013       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\nI0807 01:30:30.606920       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\\nI0807 01:30:30.607260       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-134-8.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal container=\"cluster-policy-controller-7\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-134-8.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-134-8.us-east-2.compute.internal container=\"kube-controller-manager-7\" is not ready"\nI0807 01:33:19.159203       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:33:19.159334       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:33:21.249 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7f66dd8df4-n7d98 node/ip-10-0-137-79.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): @1596763068\\\" (2020-08-07 00:17:47 +0000 UTC to 2021-08-07 00:17:47 +0000 UTC (now=2020-08-07 01:18:13.938963154 +0000 UTC))\\nI0807 01:18:13.939200       1 named_certificates.go:74] snimap[\\\"apiserver-loopback-client\\\"]: \\\"apiserver-loopback-client@1596763068\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1596763068\\\" (2020-08-07 00:17:47 +0000 UTC to 2021-08-07 00:17:47 +0000 UTC (now=2020-08-07 01:18:13.939170064 +0000 UTC))\\nI0807 01:18:13.954597       1 node_tree.go:93] Added node \\\"ip-10-0-132-61.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2a\\\" to NodeTree\\nI0807 01:18:13.966851       1 node_tree.go:93] Added node \\\"ip-10-0-134-8.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2a\\\" to NodeTree\\nI0807 01:18:13.968453       1 node_tree.go:93] Added node \\\"ip-10-0-137-79.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2a\\\" to NodeTree\\nI0807 01:18:13.972016       1 node_tree.go:93] Added node \\\"ip-10-0-147-237.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2b\\\" to NodeTree\\nI0807 01:18:13.972246       1 node_tree.go:93] Added node \\\"ip-10-0-151-93.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2b\\\" to NodeTree\\nI0807 01:18:13.972360       1 node_tree.go:93] Added node \\\"ip-10-0-132-131.us-east-2.compute.internal\\\" in group \\\"us-east-2:\\\\x00:us-east-2a\\\" to NodeTree\\nI0807 01:18:14.025823       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-134-8.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-134-8.us-east-2.compute.internal container=\"scheduler\" is not ready"\nI0807 01:33:20.019700       1 cmd.go:78] Received SIGTERM or SIGINT signal, shutting down controller.\nF0807 01:33:20.019874       1 leaderelection.go:66] leaderelection lost\n
Aug 07 01:34:41.795 E kube-apiserver Kube API started failing: Get https://api.ci-op-320tg6yx-9e0a9.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Aug 07 01:34:46.795 - 225s  E kube-apiserver Kube API is not responding to GET requests
Aug 07 01:34:46.795 - 225s  E openshift-apiserver OpenShift API is not responding to GET requests
Aug 07 01:38:37.807 E ns/openshift-machine-config-operator pod/machine-config-daemon-rnrj9 node/ip-10-0-132-61.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:38:37.808 E ns/openshift-multus pod/multus-qxstr node/ip-10-0-132-61.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:38:37.808 E ns/openshift-multus pod/multus-qxstr node/ip-10-0-132-61.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Aug 07 01:39:08.450 E ns/openshift-monitoring pod/node-exporter-phfwq node/ip-10-0-147-237.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 8-07T01:17:19Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-08-07T01:17:19Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Aug 07 01:39:08.465 E ns/openshift-controller-manager pod/controller-manager-n9r8d node/ip-10-0-147-237.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Aug 07 01:39:08.514 E ns/openshift-sdn pod/sdn-controller-6w8jq node/ip-10-0-147-237.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): ting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0807 01:20:47.961294       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"57ba7be5-ac3d-4cc8-9481-4a30c1a522a2", ResourceVersion:"27851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732358161, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-147-237\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-08-07T00:49:21Z\",\"renewTime\":\"2020-08-07T01:20:47Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-147-237 became leader'\nI0807 01:20:47.961393       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0807 01:20:47.967612       1 master.go:51] Initializing SDN master\nI0807 01:20:47.979782       1 network_controller.go:60] Started OpenShift Network Controller\nW0807 01:30:38.640630       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 26078 (32579)\nW0807 01:33:46.355292       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 26079 (35640)\n
Aug 07 01:39:08.528 E ns/openshift-sdn pod/ovs-gz5hg node/ip-10-0-147-237.us-east-2.compute.internal container=openvswitch container exited with code 1 (Error): s)\n2020-08-07T01:36:28.066Z|00325|connmgr|INFO|br0<->unix#1138: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:36:28.102Z|00326|bridge|INFO|bridge br0: deleted interface vethec54bcfd on port 33\n2020-08-07T01:36:29.499Z|00327|bridge|INFO|bridge br0: added interface veth0d275c80 on port 43\n2020-08-07T01:36:29.536Z|00328|connmgr|INFO|br0<->unix#1142: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:36:29.588Z|00329|connmgr|INFO|br0<->unix#1146: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:36:29.590Z|00330|connmgr|INFO|br0<->unix#1148: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-07T01:36:32.522Z|00331|connmgr|INFO|br0<->unix#1156: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:36:32.553Z|00332|connmgr|INFO|br0<->unix#1159: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:36:32.575Z|00333|bridge|INFO|bridge br0: deleted interface veth0d275c80 on port 43\n2020-08-07T01:36:40.726Z|00334|bridge|INFO|bridge br0: added interface veth779df89f on port 44\n2020-08-07T01:36:40.761Z|00335|connmgr|INFO|br0<->unix#1166: 5 flow_mods in the last 0 s (5 adds)\n2020-08-07T01:36:40.809Z|00336|connmgr|INFO|br0<->unix#1170: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:36:40.812Z|00337|connmgr|INFO|br0<->unix#1172: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-08-07T01:36:42.655Z|00338|connmgr|INFO|br0<->unix#1177: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:36:42.704Z|00339|connmgr|INFO|br0<->unix#1180: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:36:42.750Z|00340|bridge|INFO|bridge br0: deleted interface veth779df89f on port 44\n2020-08-07T01:36:48.115Z|00341|connmgr|INFO|br0<->unix#1186: 2 flow_mods in the last 0 s (2 deletes)\n2020-08-07T01:36:48.146Z|00342|connmgr|INFO|br0<->unix#1189: 4 flow_mods in the last 0 s (4 deletes)\n2020-08-07T01:36:48.172Z|00343|bridge|INFO|bridge br0: deleted interface veth43585d91 on port 42\n2020-08-07 01:36:50 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Aug 07 01:39:08.555 E ns/openshift-multus pod/multus-admission-controller-87dqg node/ip-10-0-147-237.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Aug 07 01:39:08.579 E ns/openshift-machine-config-operator pod/machine-config-daemon-6p2tm node/ip-10-0-147-237.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Aug 07 01:39:08.610 E ns/openshift-machine-config-operator pod/machine-config-server-lgkt7 node/ip-10-0-147-237.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0807 01:30:46.576406       1 start.go:38] Version: machine-config-daemon-4.3.27-202006211650.p0-10-g55f73172-dirty (55f7317224e7d8badc98879662771a14185e5739)\nI0807 01:30:46.612718       1 api.go:56] Launching server on :22624\nI0807 01:30:46.613133       1 api.go:56] Launching server on :22623\n
Aug 07 01:39:08.625 E ns/openshift-cluster-node-tuning-operator pod/tuned-cfx2f node/ip-10-0-147-237.us-east-2.compute.internal container=tuned container exited with code 143 (Error): er-7-ip-10-0-147-237.us-east-2.compute.internal) labels changed node wide: false\nI0807 01:36:23.578750  117618 openshift-tuned.go:550] Pod (openshift-console/console-58545c45c5-zffch) labels changed node wide: true\nI0807 01:36:26.980081  117618 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:36:26.984145  117618 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:36:27.229685  117618 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:36:27.238746  117618 openshift-tuned.go:550] Pod (openshift-machine-api/machine-api-controllers-587f7cc696-29fr5) labels changed node wide: true\nI0807 01:36:31.971688  117618 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:36:31.973225  117618 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:36:32.093533  117618 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:36:37.035442  117618 openshift-tuned.go:550] Pod (openshift-insights/insights-operator-6f9cb4fd7f-w9wjp) labels changed node wide: true\nI0807 01:36:41.972140  117618 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0807 01:36:41.973614  117618 openshift-tuned.go:441] Getting recommended profile...\nI0807 01:36:42.163039  117618 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0807 01:36:49.645886  117618 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-7df776fbd5-j2644) labels changed node wide: true\nI0807 01:36:50.315868  117618 openshift-tuned.go:137] Received signal: terminated\nI0807 01:36:50.315958  117618 openshift-tuned.go:304] Sending TERM to PID 117801\n2020-08-07 01:36:50,315 INFO     tuned.daemon.controller: terminating controller\n2020-08-07 01:36:50,315 INFO     tuned.daemon.daemon: stopping tuning\n
Aug 07 01:39:11.497 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): starthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\nI0807 01:36:50.385509       1 healthz.go:191] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-discovery-available ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/openshift.io-requestheader-reload ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-kubernetes-informers-synched ok\n[+]poststarthook/openshift.io-clientCA-reload ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\nW0807 01:36:50.391704       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.134.8 10.0.137.79]\nI0807 01:36:50.402980       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Aug 07 01:39:11.497 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0807 01:13:52.046491       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Aug 07 01:39:11.497 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:35:47.108166       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:35:47.110268       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0807 01:35:47.319752       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:35:47.320118       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Aug 07 01:39:11.526 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): rce version: 23721 (35640)\nW0807 01:33:46.368260       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Role ended with: too old resource version: 23721 (35640)\nW0807 01:33:46.369544       1 reflector.go:289] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: watch of *v1.ClusterResourceQuota ended with: too old resource version: 23758 (35640)\nI0807 01:34:15.914576       1 trace.go:81] Trace[1556703567]: "Reflector github.com/openshift/client-go/apps/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-08-07 01:33:45.912314915 +0000 UTC m=+1076.608229232) (total time: 30.002231948s):\nTrace[1556703567]: [30.002231948s] [30.002231948s] END\nE0807 01:34:15.914607       1 reflector.go:126] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nI0807 01:34:16.040213       1 trace.go:81] Trace[1628697196]: "Reflector github.com/openshift/client-go/route/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-08-07 01:33:46.038186455 +0000 UTC m=+1076.734100496) (total time: 30.001986853s):\nTrace[1628697196]: [30.001986853s] [30.001986853s] END\nE0807 01:34:16.040240       1 reflector.go:126] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0807 01:34:26.560152       1 reflector.go:126] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nE0807 01:36:50.113602       1 reflector.go:270] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\n
Aug 07 01:39:11.526 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:35:47.082955       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:35:47.083243       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:35:47.083649       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:35:47.083847       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:35:50.997374       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:35:50.997699       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:36:01.008106       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:36:01.008549       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:36:11.020037       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:36:11.020852       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:36:21.049200       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:36:21.049653       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:36:31.059835       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:36:31.060288       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0807 01:36:41.072754       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0807 01:36:41.073058       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Aug 07 01:39:11.526 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): anager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:35.873919       1 webhook.go:107] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0807 01:15:35.873955       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0807 01:15:46.052013       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found]\n
Aug 07 01:39:14.880 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-147-237.us-east-2.compute.internal node/ip-10-0-147-237.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): ilable: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:36:32.862963       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-69df5879c9-8fzwd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:36:35.360470       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-69df5879c9-8fzwd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:36:37.582752       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-69df5879c9-8fzwd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:36:42.584026       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-69df5879c9-8fzwd: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0807 01:36:43.086387       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-5c86884859-qw4bp is bound successfully on node "ip-10-0-134-8.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16416932Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15265956Ki>|Pods<250>|StorageEphemeral<114381692328>.".\n
Aug 07 01:39:15.992 E ns/openshift-multus pod/multus-dwf87 node/ip-10-0-147-237.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:39:17.532 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io prometheus-k8s)
Aug 07 01:39:18.589 E ns/openshift-machine-config-operator pod/machine-config-daemon-6p2tm node/ip-10-0-147-237.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Aug 07 01:39:20.277 E ns/openshift-multus pod/multus-dwf87 node/ip-10-0-147-237.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Aug 07 01:39:22.525 E ns/openshift-multus pod/multus-dwf87 node/ip-10-0-147-237.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending