Result: SUCCESS
Tests: 3 failed / 20 succeeded
Started: 2020-03-06 13:38
Elapsed: 1h24m
Work namespace: ci-op-0c30lzpk
Refs: release-4.3:3ce21b38, 298:d955283e
pod: b011033a-5faf-11ea-b052-0a58ac100730
repo: openshift/cluster-api-provider-aws
revision: 1

Test Failures


Cluster upgrade control-plane-upgrade (33m55s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\scontrol\-plane\-upgrade$'
API was unreachable during upgrade for at least 1m11s:

Mar 06 14:35:07.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:35:07.561 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:43:51.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:43:51.560 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:44:07.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:44:08.530 - 3s    E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:44:11.715 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:07.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:47:07.559 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:18.185 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:18.256 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:24.326 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:24.382 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:27.398 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:27.530 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:47:30.499 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:36.614 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:36.647 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:45.830 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:46.530 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:47:48.937 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:51.973 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:52.012 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:47:58.118 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:47:58.530 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:48:01.222 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:48:07.334 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:48:07.530 - 2s    E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:48:10.434 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:48:19.622 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:48:19.652 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:48:22.693 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:48:23.530 E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:48:23.559 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:48:25.767 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Mar 06 14:48:26.530 E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:48:26.563 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:49:43.406 E kube-apiserver Kube API started failing: etcdserver: request timed out
Mar 06 14:49:43.459 I kube-apiserver Kube API started responding to GET requests
Mar 06 14:49:50.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:49:51.530 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:50:05.574 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:50:21.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:50:21.570 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:50:37.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 06 14:50:38.530 - 43s   E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:51:22.561 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:51:38.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar 06 14:51:38.561 I openshift-apiserver OpenShift API started responding to GET requests
Mar 06 14:51:55.530 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Mar 06 14:51:55.560 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1583506516.xml
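
The events above were recorded by the upgrade monitor, which repeatedly issues GET requests against a deliberately nonexistent imagestream (.../imagestreams/missing?timeout=15s) and logs every transition between the API responding and not responding; a 404 still counts as responding, while client timeouts and 503s do not. Below is a minimal sketch of that style of availability poller, assuming a plain HTTP client; the names and structure are illustrative assumptions, not the openshift/origin monitor code.

```go
// availabilitypoll.go: a minimal sketch of an API-availability poller in the
// style of the upgrade monitor above. Illustrative only; names, intervals and
// thresholds are assumptions, not the openshift/origin implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollAvailability GETs url once per second and prints a line each time the
// endpoint flips between responding and not responding, mirroring the
// "stopped responding" / "started responding" events in the log.
func pollAvailability(url string, timeout time.Duration, stop <-chan struct{}) {
	client := &http.Client{Timeout: timeout}
	available := true
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			var reason string
			resp, err := client.Get(url)
			switch {
			case err != nil:
				reason = err.Error() // network error or client timeout
			case resp.StatusCode >= 500:
				reason = fmt.Sprintf("the server returned HTTP %d", resp.StatusCode)
			}
			if resp != nil {
				resp.Body.Close()
			}
			now := time.Now().Format("Jan 02 15:04:05.000")
			if reason == "" && !available {
				fmt.Printf("%s I API started responding to GET requests\n", now)
				available = true
			} else if reason != "" && available {
				fmt.Printf("%s E API stopped responding to GET requests: %s\n", now, reason)
				available = false
			}
		}
	}
}

func main() {
	// Hypothetical endpoint; the real monitor targets a nonexistent imagestream
	// so that a 404 still proves the openshift-apiserver is serving requests.
	url := "https://api.example.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"
	stop := make(chan struct{})
	go pollAvailability(url, 15*time.Second, stop)
	time.Sleep(time.Minute)
	close(stop)
}
```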



Cluster upgrade k8s-service-upgrade (34m56s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sk8s\-service\-upgrade$'
Service was unreachable during upgrade for at least 31s:

Mar 06 14:21:51.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:21:51.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests on reused connections
Mar 06 14:21:52.548 - 3s    E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests on reused connections
Mar 06 14:21:52.548 - 3s    E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:21:56.620 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests on reused connections
Mar 06 14:21:56.621 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:21:57.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:21:58.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:21:58.631 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:22:00.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:22:01.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:22:01.619 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:22:02.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:22:03.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:22:03.629 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:22:12.560 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:22:13.548 - 999ms E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:22:14.621 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:22:39.556 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests over new connections
Mar 06 14:22:40.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests over new connections
Mar 06 14:22:40.635 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests over new connections
Mar 06 14:34:05.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests on reused connections
Mar 06 14:34:06.549 - 7s    E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service is not responding to GET requests on reused connections
Mar 06 14:34:14.904 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests on reused connections
Mar 06 14:34:27.550 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests on reused connections
Mar 06 14:34:27.657 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests on reused connections
Mar 06 14:35:44.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests on reused connections
Mar 06 14:35:44.613 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests on reused connections
Mar 06 14:36:09.548 E ns/e2e-k8s-service-upgrade-3597 svc/service-test Service stopped responding to GET requests on reused connections
Mar 06 14:36:09.605 I ns/e2e-k8s-service-upgrade-3597 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1583506516.xml
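
The service check above probes the test Service both "over new connections" (a fresh TCP connection per request) and "on reused connections" (HTTP keep-alive), which is why each disruption is reported separately for the two modes. A minimal sketch of those two probe modes is below, assuming Go's standard http.Transport; the endpoint and names are hypothetical, not the actual e2e service-upgrade test code.

```go
// serviceprobe.go: a minimal sketch of probing a Service "over new
// connections" versus "on reused connections", as the log above distinguishes.
// Illustrative only; endpoint, names and cadence are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// newConnClient forces a fresh TCP connection for every request by disabling
// HTTP keep-alives, so each probe also exercises connection setup through the
// Service's load balancer.
func newConnClient(timeout time.Duration) *http.Client {
	return &http.Client{
		Timeout:   timeout,
		Transport: &http.Transport{DisableKeepAlives: true},
	}
}

// reusedConnClient keeps connections alive between probes, so a failed probe
// indicates an already-established connection was dropped mid-upgrade.
func reusedConnClient(timeout time.Duration) *http.Client {
	return &http.Client{
		Timeout:   timeout,
		Transport: &http.Transport{MaxIdleConnsPerHost: 1},
	}
}

// probe issues one GET and reports whether the Service answered, tagged with
// the connection mode so the output mirrors the log lines above.
func probe(mode string, c *http.Client, url string) {
	resp, err := c.Get(url)
	if err != nil {
		fmt.Printf("E svc/service-test Service stopped responding to GET requests %s: %v\n", mode, err)
		return
	}
	resp.Body.Close()
	fmt.Printf("I svc/service-test Service responding to GET requests %s\n", mode)
}

func main() {
	// Hypothetical in-cluster endpoint for the test Service.
	url := "http://service-test.e2e-k8s-service-upgrade.svc/echo"
	fresh := newConnClient(5 * time.Second)
	reused := reusedConnClient(5 * time.Second)
	for i := 0; i < 10; i++ {
		probe("over new connections", fresh, url)
		probe("on reused connections", reused, url)
		time.Sleep(time.Second)
	}
}
```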



openshift-tests Monitor cluster while tests execute (35m0s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
214 error level events were detected during this test run:

Mar 06 14:22:00.177 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 508)
Mar 06 14:22:17.065 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-596d59895f-5q694 node/ip-10-0-131-229.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:24:00.409 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-5959f6b469-htgnn node/ip-10-0-131-229.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ient-go/config/informers/externalversions/factory.go:101: watch of *v1.Scheduler ended with: too old resource version: 6235 (14907)\nW0306 14:17:44.076946       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10002 (14369)\nW0306 14:17:44.077646       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14164 (14368)\nW0306 14:17:44.077730       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 14752 (14982)\nW0306 14:17:44.077781       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 9469 (14371)\nW0306 14:17:44.077823       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14196 (14368)\nW0306 14:17:44.116872       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14189 (14368)\nW0306 14:17:44.146274       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: too old resource version: 10538 (14382)\nW0306 14:17:44.360874       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14106 (14368)\nW0306 14:19:33.573946       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 16413 (16597)\nW0306 14:22:13.708336       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17682 (18104)\nI0306 14:23:59.346151       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:23:59.346349       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:25:28.697 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7cb4759f68-jcfzz node/ip-10-0-131-229.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): .071767       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Image ended with: too old resource version: 8929 (14905)\nW0306 14:17:44.071899       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 6238 (14901)\nW0306 14:17:44.072163       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14106 (14368)\nW0306 14:17:44.072552       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 6242 (14903)\nW0306 14:17:44.072641       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 12419 (14982)\nW0306 14:17:44.073712       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.OpenShiftAPIServer ended with: too old resource version: 9930 (14907)\nW0306 14:17:44.145978       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14172 (14368)\nW0306 14:17:44.303849       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 6173 (14372)\nW0306 14:19:33.573110       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 16413 (16597)\nW0306 14:22:13.707476       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 17682 (18104)\nI0306 14:25:27.906048       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:25:27.906127       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:25:45.775 E ns/openshift-machine-api pod/machine-api-operator-594cdf86dc-r55dq node/ip-10-0-131-229.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Mar 06 14:27:23.837 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): g   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods. (default "10.0.0.0/24")\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Mar 06 14:27:55.060 E ns/openshift-machine-api pod/machine-api-controllers-6484554c57-mj2wf node/ip-10-0-154-22.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 06 14:28:43.754 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-5ffcf6c99d-rkb2f node/ip-10-0-132-205.us-east-2.compute.internal container=cluster-storage-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:01.625 E ns/openshift-image-registry pod/image-registry-597db8c9fc-cjn8f node/ip-10-0-150-166.us-east-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:01.847 E ns/openshift-authentication-operator pod/authentication-operator-777cc48b45-762lk node/ip-10-0-132-205.us-east-2.compute.internal container=operator container exited with code 255 (Error): n: 16071 (19373)\nW0306 14:27:23.449122       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19075 (19931)\nW0306 14:27:23.449244       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19367 (19931)\nW0306 14:27:23.500728       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 19294 (19371)\nW0306 14:27:23.539026       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 19294 (19371)\nW0306 14:27:23.568122       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19362 (19931)\nW0306 14:27:23.804534       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 16617 (19618)\nW0306 14:27:23.885591       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.APIServer ended with: too old resource version: 17360 (19615)\nW0306 14:27:23.886290       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 16614 (19634)\nW0306 14:28:17.890950       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20680 (20708)\nW0306 14:28:20.493471       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20708 (20728)\nI0306 14:29:00.819826       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:29:00.820011       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:29:04.824 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-166.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/06 14:18:01 Watching directory: "/etc/alertmanager/config"\n
Mar 06 14:29:04.824 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-166.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/06 14:18:02 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:18:02 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/06 14:18:02 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/06 14:18:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/06 14:18:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/06 14:18:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:18:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/06 14:18:02 http.go:96: HTTPS: listening on [::]:9095\n
Mar 06 14:29:06.781 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-130-221.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/06 14:29:01 Watching directory: "/etc/alertmanager/config"\n
Mar 06 14:29:06.781 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-130-221.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/06 14:29:02 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:29:02 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/06 14:29:02 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/06 14:29:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/06 14:29:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/06 14:29:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:29:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/06 14:29:03 http.go:96: HTTPS: listening on [::]:9095\n
Mar 06 14:29:08.821 E ns/openshift-monitoring pod/telemeter-client-5c5d875b8d-7zp84 node/ip-10-0-130-221.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:08.821 E ns/openshift-monitoring pod/telemeter-client-5c5d875b8d-7zp84 node/ip-10-0-130-221.us-east-2.compute.internal container=reload container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:08.821 E ns/openshift-monitoring pod/telemeter-client-5c5d875b8d-7zp84 node/ip-10-0-130-221.us-east-2.compute.internal container=telemeter-client container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:12.691 E ns/openshift-monitoring pod/prometheus-adapter-5778987bb7-tzzss node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0306 14:18:44.801402       1 adapter.go:93] successfully using in-cluster auth\nI0306 14:18:45.622840       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 06 14:29:12.930 E ns/openshift-operator-lifecycle-manager pod/packageserver-5d4b6fdfd4-wsdcc node/ip-10-0-132-205.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:20.571 E ns/openshift-cluster-machine-approver pod/machine-approver-cdf9f7d79-9hskh node/ip-10-0-131-229.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 1196       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0306 14:12:54.261300       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-150-166.us-east-2.compute.internal\nI0306 14:12:54.416186       1 main.go:196] CSR csr-hf6l8 approved\nI0306 14:12:57.512875       1 main.go:146] CSR csr-8m8v4 added\nI0306 14:12:57.549687       1 csr_check.go:418] retrieving serving cert from ip-10-0-130-221.us-east-2.compute.internal (10.0.130.221:10250)\nW0306 14:12:57.551896       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0306 14:12:57.551916       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-130-221.us-east-2.compute.internal\nI0306 14:12:57.582888       1 main.go:196] CSR csr-8m8v4 approved\nI0306 14:13:02.318534       1 main.go:146] CSR csr-c4rwr added\nI0306 14:13:02.370831       1 csr_check.go:418] retrieving serving cert from ip-10-0-129-121.us-east-2.compute.internal (10.0.129.121:10250)\nW0306 14:13:02.371648       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0306 14:13:02.371777       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-129-121.us-east-2.compute.internal\nI0306 14:13:02.380143       1 main.go:196] CSR csr-c4rwr approved\nE0306 14:19:35.035589       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=10471&timeoutSeconds=582&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0306 14:19:36.036294       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Mar 06 14:29:22.435 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7f95b5f4d7-2mgxq node/ip-10-0-131-229.us-east-2.compute.internal container=operator container exited with code 255 (Error):  request.go:538] Throttling request took 193.760975ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nE0306 14:28:55.209789       1 operator.go:158] key failed with : Operation cannot be fulfilled on openshiftcontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again\nI0306 14:29:14.936245       1 request.go:538] Throttling request took 145.712319ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0306 14:29:15.136198       1 request.go:538] Throttling request took 194.249946ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0306 14:29:15.202913       1 status_controller.go:165] clusteroperator/openshift-controller-manager diff {"status":{"conditions":[{"lastTransitionTime":"2020-03-06T14:07:48Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-03-06T14:29:15Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-03-06T14:10:15Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-03-06T14:07:48Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0306 14:29:15.210061       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"65825935-e054-44a3-8ba0-ef75643fd0d8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("")\nI0306 14:29:19.524919       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:29:19.525085       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:29:23.010 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6dcb48hn2 node/ip-10-0-131-229.us-east-2.compute.internal container=operator container exited with code 255 (Error):  streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.401003       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Proxy total 0 items received\nI0306 14:29:19.397130       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.402693       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 0 items received\nI0306 14:29:19.397158       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.407014       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received\nI0306 14:29:19.408395       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.408500       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received\nI0306 14:29:19.409106       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.409320       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 31 items received\nI0306 14:29:19.409834       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.409991       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received\nI0306 14:29:19.410840       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0306 14:29:19.410920       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 0 items received\nI0306 14:29:19.510959       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:29:19.511121       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:29:24.059 E ns/openshift-service-ca-operator pod/service-ca-operator-789fdff688-s7ftc node/ip-10-0-131-229.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Mar 06 14:29:24.407 E ns/openshift-monitoring pod/node-exporter-jkvgr node/ip-10-0-131-229.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:12:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:12:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:29:28.337 E ns/openshift-monitoring pod/grafana-78b6c95947-xwd2j node/ip-10-0-129-121.us-east-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Mar 06 14:29:29.827 E ns/openshift-operator-lifecycle-manager pod/packageserver-b8f678cb7-nkrmv node/ip-10-0-132-205.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:35.370 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-121.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-03-06T14:29:31.900Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-06T14:29:31.904Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-06T14:29:31.905Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-06T14:29:31.907Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-06T14:29:31.907Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-06T14:29:31.907Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-03-06T14:29:31.910Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-06T14:29:31.910Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-03-06
Mar 06 14:29:38.472 E ns/openshift-cluster-node-tuning-operator pod/tuned-wdrn5 node/ip-10-0-131-229.us-east-2.compute.internal container=tuned container exited with code 143 (Error): .584516   21775 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:27:58.586202   21775 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:27:58.716769   21775 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:28:00.458037   21775 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:28:09.371378   21775 openshift-tuned.go:550] Pod (openshift-kube-apiserver/kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: true\nI0306 14:28:13.584497   21775 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:28:13.586142   21775 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:28:13.711620   21775 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:28:19.680739   21775 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-7-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:28:24.982839   21775 openshift-tuned.go:550] Pod (openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: true\nI0306 14:28:28.584522   21775 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:28:28.586248   21775 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:28:28.706820   21775 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:29:19.392003   21775 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0306 14:29:19.399661   21775 openshift-tuned.go:881] Pod event watch channel closed.\nI0306 14:29:19.400119   21775 openshift-tuned.go:883] Increasing resyncPeriod to 130\n
Mar 06 14:29:38.866 E ns/openshift-cluster-node-tuning-operator pod/tuned-cpsww node/ip-10-0-132-205.us-east-2.compute.internal container=tuned container exited with code 143 (Error):  Getting recommended profile...\nI0306 14:29:00.151240   19229 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:29:03.267729   19229 openshift-tuned.go:550] Pod (openshift-authentication-operator/authentication-operator-777cc48b45-762lk) labels changed node wide: true\nI0306 14:29:04.982248   19229 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:04.983998   19229 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:05.128045   19229 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:29:05.145251   19229 openshift-tuned.go:550] Pod (openshift-image-registry/cluster-image-registry-operator-797bb4bbc8-p2rrk) labels changed node wide: true\nI0306 14:29:09.982293   19229 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:09.983820   19229 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:10.131575   19229 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:29:11.819098   19229 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-5d4b6fdfd4-wsdcc) labels changed node wide: true\nI0306 14:29:14.982270   19229 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:14.983879   19229 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:15.109070   19229 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:29:19.397847   19229 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0306 14:29:19.413860   19229 openshift-tuned.go:881] Pod event watch channel closed.\nI0306 14:29:19.413942   19229 openshift-tuned.go:883] Increasing resyncPeriod to 120\n
Mar 06 14:29:38.889 E ns/openshift-cluster-node-tuning-operator pod/tuned-lbg7m node/ip-10-0-130-221.us-east-2.compute.internal container=tuned container exited with code 143 (Error):  (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0306 14:28:57.547274    2790 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:28:57.548772    2790 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:28:57.660317    2790 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:29:00.157884    2790 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-6747d9c6d8-m494l) labels changed node wide: true\nI0306 14:29:02.547234    2790 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:02.558434    2790 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:02.696463    2790 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:29:03.438015    2790 openshift-tuned.go:550] Pod (openshift-image-registry/node-ca-cb5dp) labels changed node wide: true\nI0306 14:29:07.547246    2790 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:07.552463    2790 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:07.710456    2790 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:29:14.528656    2790 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-2) labels changed node wide: true\nI0306 14:29:17.547303    2790 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:29:17.551279    2790 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:29:17.752369    2790 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nE0306 14:29:19.396865    2790 openshift-tuned.go:881] Pod event watch channel closed.\nI0306 14:29:19.396886    2790 openshift-tuned.go:883] Increasing resyncPeriod to 132\n
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:43.960 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:44.967 E ns/openshift-monitoring pod/node-exporter-np7xm node/ip-10-0-132-205.us-east-2.compute.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:44.967 E ns/openshift-monitoring pod/node-exporter-np7xm node/ip-10-0-132-205.us-east-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:29:48.941 E ns/openshift-monitoring pod/thanos-querier-b7b766c5b-58vh7 node/ip-10-0-150-166.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/03/06 14:18:53 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/06 14:18:53 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/06 14:18:53 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/06 14:18:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/06 14:18:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/06 14:18:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/06 14:18:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/06 14:18:54 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/03/06 14:18:54 http.go:96: HTTPS: listening on [::]:9091\n
Mar 06 14:29:51.963 E ns/openshift-console-operator pod/console-operator-84785b74bb-7k7jg node/ip-10-0-132-205.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): 1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 16614 (19634)\nW0306 14:27:22.960002       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19629 (19928)\nW0306 14:27:22.960111       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 16071 (19373)\nW0306 14:27:23.491225       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 16616 (19931)\nW0306 14:27:23.491337       1 reflector.go:299] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: watch of *v1.ConsoleCLIDownload ended with: too old resource version: 16630 (19614)\nW0306 14:27:23.809442       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 16631 (19618)\nW0306 14:27:23.811547       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19320 (19932)\nW0306 14:27:23.815811       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 16631 (19622)\nW0306 14:28:17.906238       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20680 (20708)\nW0306 14:28:20.494671       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 20708 (20728)\nI0306 14:29:50.969436       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:29:50.969496       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:29:53.986 E ns/openshift-controller-manager pod/controller-manager-tksnv node/ip-10-0-132-205.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Mar 06 14:30:20.004 E ns/openshift-marketplace pod/certified-operators-55897bcbd-p6tbd node/ip-10-0-150-166.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Mar 06 14:30:23.037 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-03-06T14:29:57.671Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-06T14:29:57.675Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-06T14:29:57.676Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-06T14:29:57.677Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-06T14:29:57.677Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-06T14:29:57.677Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-06T14:29:57.678Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-03-06T14:29:57.678Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-06T14:29:57.678Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-03-06
Mar 06 14:30:30.142 E ns/openshift-monitoring pod/node-exporter-46vk5 node/ip-10-0-154-22.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:13:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:30:33.828 E ns/openshift-service-ca pod/service-serving-cert-signer-575bc58c9d-wqzzb node/ip-10-0-131-229.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Mar 06 14:30:34.865 E ns/openshift-service-ca pod/configmap-cabundle-injector-54b5fbfcc6-zjgfh node/ip-10-0-131-229.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:30:46.878 E ns/openshift-controller-manager pod/controller-manager-nfcds node/ip-10-0-131-229.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Mar 06 14:30:53.103 E ns/openshift-monitoring pod/node-exporter-9qv8x node/ip-10-0-150-166.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:13:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:13:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:31:06.309 E ns/openshift-console pod/console-7ffb9678-5v9fj node/ip-10-0-154-22.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/03/6 14:17:30 cmd/main: cookies are secure!\n2020/03/6 14:17:30 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/03/6 14:17:40 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/03/6 14:17:50 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/03/6 14:18:00 cmd/main: Binding to [::]:8443...\n2020/03/6 14:18:00 cmd/main: using TLS\n
Mar 06 14:31:15.252 E ns/openshift-console pod/console-7ffb9678-mf8ck node/ip-10-0-132-205.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/03/6 14:17:46 cmd/main: cookies are secure!\n2020/03/6 14:17:46 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/03/6 14:17:56 cmd/main: Binding to [::]:8443...\n2020/03/6 14:17:56 cmd/main: using TLS\n
Mar 06 14:31:31.413 E ns/openshift-controller-manager pod/controller-manager-rl6k8 node/ip-10-0-132-205.us-east-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:32:18.555 E ns/openshift-controller-manager pod/controller-manager-b6cxz node/ip-10-0-154-22.us-east-2.compute.internal container=controller-manager container exited with code 137 (OOMKilled): 
Mar 06 14:33:32.788 E ns/openshift-network-operator pod/network-operator-bc7565f5b-7kznb node/ip-10-0-132-205.us-east-2.compute.internal container=network-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:33:54.849 E ns/openshift-sdn pod/sdn-controller-8qk4g node/ip-10-0-132-205.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0306 14:05:28.865010       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 06 14:34:03.489 E ns/openshift-sdn pod/sdn-j276d node/ip-10-0-150-166.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ybrid proxy: syncProxyRules start\nI0306 14:32:55.020379    2908 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:32:55.089377    2908 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:32:55.089401    2908 proxier.go:350] userspace syncProxyRules took 68.999013ms\nI0306 14:32:55.089434    2908 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:33:25.089669    2908 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:33:25.256207    2908 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:33:25.324464    2908 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:33:25.324487    2908 proxier.go:350] userspace syncProxyRules took 68.255561ms\nI0306 14:33:25.324498    2908 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:33:52.635928    2908 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.2:6443 10.130.0.5:6443]\nI0306 14:33:52.635976    2908 roundrobin.go:218] Delete endpoint 10.129.0.15:6443 for service "openshift-multus/multus-admission-controller:"\nI0306 14:33:52.636046    2908 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:33:52.810103    2908 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:33:52.877497    2908 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:33:52.877518    2908 proxier.go:350] userspace syncProxyRules took 67.390986ms\nI0306 14:33:52.877527    2908 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:34:02.025954    2908 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0306 14:34:02.831823    2908 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0306 14:34:02.831863    2908 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
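The repeated code-255 exits from the sdn pods here and below all follow the same visible pattern: the pod's healthcheck notices that the OVS database socket has gone away (or that br0 is no longer a bridge), and once the OVS server comes back it deliberately terminates so the kubelet restarts the container and the plugin re-initializes against the new OVS instance. A purely illustrative, stdlib-only Go sketch of that kind of watchdog (the socket path is the one from the log; everything else is hypothetical and not the actual openshift-sdn code, which logs via klog, whose Fatal exits with status 255):

package main

import (
	"log"
	"net"
	"time"
)

// Path of the OVS database socket, as seen in the healthcheck message above.
const ovsSock = "/var/run/openvswitch/db.sock"

// ovsUp reports whether the OVS database socket currently accepts connections.
func ovsUp() bool {
	c, err := net.DialTimeout("unix", ovsSock, time.Second)
	if err != nil {
		return false
	}
	c.Close()
	return true
}

func main() {
	wasUp := ovsUp()
	for range time.Tick(5 * time.Second) {
		up := ovsUp()
		switch {
		case wasUp && !up:
			// OVS went away (e.g. the ovs pod on this node is being upgraded).
			log.Printf("SDN healthcheck unable to reconnect to OVS server: %s not reachable", ovsSock)
		case !wasUp && up:
			// OVS is back, but with empty bridges/flows: exiting non-zero is the
			// simplest way to force a full plugin re-setup via a container restart.
			log.Fatal("SDN healthcheck detected OVS server change, restarting")
		}
		wasUp = up
	}
}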
Mar 06 14:34:06.932 E ns/openshift-sdn pod/sdn-controller-xt8tv node/ip-10-0-154-22.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): with: too old resource version: 11489 (15198)\nI0306 14:20:20.081412       1 vnids.go:115] Allocated netid 295349 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8040"\nI0306 14:20:20.088540       1 vnids.go:115] Allocated netid 9650391 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3236"\nI0306 14:20:20.116482       1 vnids.go:115] Allocated netid 7512261 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-7127"\nI0306 14:20:20.131093       1 vnids.go:115] Allocated netid 2273373 for namespace "e2e-k8s-sig-apps-job-upgrade-1494"\nI0306 14:20:20.144871       1 vnids.go:115] Allocated netid 2693822 for namespace "e2e-control-plane-upgrade-4623"\nI0306 14:20:20.158133       1 vnids.go:115] Allocated netid 16150840 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9011"\nI0306 14:20:20.173860       1 vnids.go:115] Allocated netid 15665591 for namespace "e2e-k8s-service-upgrade-3597"\nI0306 14:20:20.186633       1 vnids.go:115] Allocated netid 4129640 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-7429"\nW0306 14:25:34.622970       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 15198 (19312)\nW0306 14:25:34.875487       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 16993 (19315)\nW0306 14:29:20.003930       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 17316 (20163)\nW0306 14:29:20.366144       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 19315 (22550)\nW0306 14:29:20.366328       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 19312 (22551)\n
Mar 06 14:34:09.534 E ns/openshift-sdn pod/sdn-controller-9vt5m node/ip-10-0-131-229.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0306 14:05:30.097906       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 06 14:34:20.622 E ns/openshift-sdn pod/sdn-g9n98 node/ip-10-0-131-229.us-east-2.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:34:23.608 E ns/openshift-multus pod/multus-admission-controller-hv7wg node/ip-10-0-131-229.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 06 14:34:27.597 E ns/openshift-sdn pod/sdn-b948w node/ip-10-0-130-221.us-east-2.compute.internal container=sdn container exited with code 255 (Error): syncProxyRules complete\nI0306 14:33:52.901603    2923 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:33:52.901627    2923 proxier.go:350] userspace syncProxyRules took 68.151095ms\nI0306 14:33:52.901638    2923 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:34:03.370352    2923 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-3597/service-test: to [10.128.2.13:80]\nI0306 14:34:03.370394    2923 roundrobin.go:218] Delete endpoint 10.131.0.22:80 for service "e2e-k8s-service-upgrade-3597/service-test:"\nI0306 14:34:03.370452    2923 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:34:03.565542    2923 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:34:03.632956    2923 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:34:03.632982    2923 proxier.go:350] userspace syncProxyRules took 67.416362ms\nI0306 14:34:03.632997    2923 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:34:17.366896    2923 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-3597/service-test: to [10.128.2.13:80 10.131.0.22:80]\nI0306 14:34:17.366926    2923 roundrobin.go:218] Delete endpoint 10.131.0.22:80 for service "e2e-k8s-service-upgrade-3597/service-test:"\nI0306 14:34:17.366988    2923 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:34:17.534687    2923 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:34:17.605859    2923 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:34:17.605882    2923 proxier.go:350] userspace syncProxyRules took 71.167009ms\nI0306 14:34:17.605892    2923 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:34:27.133640    2923 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0306 14:34:27.133679    2923 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 06 14:35:09.468 E ns/openshift-multus pod/multus-pbwt6 node/ip-10-0-130-221.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 06 14:35:16.581 E ns/openshift-sdn pod/sdn-nhjvb node/ip-10-0-131-229.us-east-2.compute.internal container=sdn container exited with code 255 (Error): proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:08.653816   76654 proxier.go:350] userspace syncProxyRules took 72.604899ms\nI0306 14:35:08.653828   76654 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:14.171354   76654 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.2:6443 10.129.0.62:6443 10.130.0.68:6443]\nI0306 14:35:14.171393   76654 roundrobin.go:218] Delete endpoint 10.130.0.68:6443 for service "openshift-multus/multus-admission-controller:"\nI0306 14:35:14.171491   76654 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:35:14.229988   76654 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.62:6443 10.130.0.68:6443]\nI0306 14:35:14.230021   76654 roundrobin.go:218] Delete endpoint 10.128.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0306 14:35:14.378301   76654 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:35:14.458315   76654 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:14.458350   76654 proxier.go:350] userspace syncProxyRules took 80.023129ms\nI0306 14:35:14.458367   76654 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:14.458383   76654 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:35:14.660455   76654 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:35:14.732467   76654 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:14.732495   76654 proxier.go:350] userspace syncProxyRules took 71.999778ms\nI0306 14:35:14.732509   76654 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:15.575088   76654 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0306 14:35:15.575150   76654 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 06 14:35:35.238 E ns/openshift-sdn pod/ovs-6xd92 node/ip-10-0-129-121.us-east-2.compute.internal container=openvswitch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:35:41.250 E ns/openshift-sdn pod/sdn-kblsr node/ip-10-0-129-121.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ] Adding new service port "openshift-monitoring/grafana:https" at 172.30.195.99:3000/TCP\nI0306 14:35:16.305874   55878 service.go:357] Adding new service port "openshift-kube-controller-manager-operator/metrics:https" at 172.30.109.139:443/TCP\nI0306 14:35:16.305900   55878 service.go:357] Adding new service port "openshift-monitoring/prometheus-k8s:web" at 172.30.179.200:9091/TCP\nI0306 14:35:16.305918   55878 service.go:357] Adding new service port "openshift-monitoring/prometheus-k8s:tenancy" at 172.30.179.200:9092/TCP\nI0306 14:35:16.306138   55878 proxier.go:705] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0306 14:35:16.509769   55878 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:16.509804   55878 proxier.go:350] userspace syncProxyRules took 205.088796ms\nI0306 14:35:16.530146   55878 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:16.530178   55878 proxier.go:350] userspace syncProxyRules took 225.271591ms\nI0306 14:35:16.601118   55878 proxier.go:1524] Opened local port "nodePort for e2e-k8s-service-upgrade-3597/service-test:" (:31123/tcp)\nI0306 14:35:16.601287   55878 proxier.go:1524] Opened local port "nodePort for openshift-ingress/router-default:http" (:30231/tcp)\nI0306 14:35:16.601438   55878 proxier.go:1524] Opened local port "nodePort for openshift-ingress/router-default:https" (:32009/tcp)\nI0306 14:35:16.641798   55878 healthcheck.go:151] Opening healthcheck "openshift-ingress/router-default" on port 31594\nI0306 14:35:16.822877   55878 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0306 14:35:16.822917   55878 cmd.go:173] openshift-sdn network plugin registering startup\nI0306 14:35:16.823066   55878 cmd.go:177] openshift-sdn network plugin ready\nI0306 14:35:40.183597   55878 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0306 14:35:40.183647   55878 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 06 14:35:45.221 E ns/openshift-multus pod/multus-admission-controller-26jbg node/ip-10-0-132-205.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 06 14:35:45.672 E ns/openshift-service-ca pod/service-serving-cert-signer-66d7855575-whjm2 node/ip-10-0-131-229.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Mar 06 14:35:49.665 E ns/openshift-service-ca pod/apiservice-cabundle-injector-59f697c649-mvx6q node/ip-10-0-131-229.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Mar 06 14:35:50.673 E ns/openshift-multus pod/multus-fc9nf node/ip-10-0-131-229.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 06 14:36:03.342 E ns/openshift-sdn pod/sdn-nvmkh node/ip-10-0-154-22.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ok 71.222217ms\nI0306 14:35:14.443683   77176 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:14.443694   77176 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:35:14.685469   77176 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:35:14.758758   77176 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:14.758784   77176 proxier.go:350] userspace syncProxyRules took 73.289539ms\nI0306 14:35:14.758799   77176 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:44.759035   77176 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:35:44.936427   77176 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:35:45.009492   77176 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:45.009519   77176 proxier.go:350] userspace syncProxyRules took 73.069739ms\nI0306 14:35:45.009533   77176 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:35:57.272676   77176 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.62:6443 10.129.0.62:6443 10.130.0.68:6443]\nI0306 14:35:57.272840   77176 roundrobin.go:218] Delete endpoint 10.128.0.62:6443 for service "openshift-multus/multus-admission-controller:"\nI0306 14:35:57.272960   77176 proxy.go:334] hybrid proxy: syncProxyRules start\nI0306 14:35:57.464506   77176 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0306 14:35:57.541216   77176 proxier.go:371] userspace proxy: processing 0 service events\nI0306 14:35:57.541242   77176 proxier.go:350] userspace syncProxyRules took 76.710315ms\nI0306 14:35:57.541255   77176 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0306 14:36:02.383620   77176 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0306 14:36:02.383660   77176 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 06 14:36:10.361 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5694dd55b6-t8p5v node/ip-10-0-154-22.us-east-2.compute.internal container=manager container exited with code 1 (Error): ft-machine-api-gcp secret=openshift-machine-api/gcp-cloud-credentials\ntime="2020-03-06T14:29:37Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-gcp secret=openshift-machine-api/gcp-cloud-credentials\ntime="2020-03-06T14:29:37Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-03-06T14:29:37Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-03-06T14:29:37Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-03-06T14:29:37Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-03-06T14:29:37Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-03-06T14:29:37Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-03-06T14:29:37Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-03-06T14:31:37Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-03-06T14:31:37Z" level=info msg="reconcile complete" controller=metrics elapsed=1.28668ms\ntime="2020-03-06T14:33:37Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-03-06T14:33:37Z" level=info msg="reconcile complete" controller=metrics elapsed=1.210949ms\ntime="2020-03-06T14:35:37Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-03-06T14:35:37Z" level=info msg="reconcile complete" controller=metrics elapsed=1.255105ms\ntime="2020-03-06T14:36:09Z" level=error msg="leader election lostunable to run the manager"\n
Mar 06 14:36:29.420 E ns/openshift-multus pod/multus-w5895 node/ip-10-0-154-22.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 06 14:37:15.845 E ns/openshift-multus pod/multus-rhp54 node/ip-10-0-150-166.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 06 14:38:18.171 E ns/openshift-machine-config-operator pod/machine-config-operator-5db9665cd9-n4wk7 node/ip-10-0-131-229.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): lversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 16617 (19618)\nW0306 14:27:22.905323       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 18963 (19928)\nW0306 14:27:23.017203       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 15278 (19381)\nW0306 14:27:23.017578       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 16608 (19623)\nW0306 14:27:23.496900       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 16612 (19616)\nW0306 14:27:23.497052       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 16606 (19621)\nW0306 14:27:23.538939       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: too old resource version: 15294 (19381)\nW0306 14:27:23.725820       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRoleBinding ended with: too old resource version: 15298 (19381)\nW0306 14:27:23.914291       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 16615 (19628)\nW0306 14:27:23.914540       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 16614 (19634)\nW0306 14:27:23.914701       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 17350 (19373)\n
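The "watch of *v1.X ended with: too old resource version: A (B)" warnings that fill this log (and the sdn-controller log at 14:34:06 above) are ordinary client-go reflector behaviour during an upgrade: etcd compaction invalidates the resourceVersion a watch tried to resume from, the watch ends, and the reflector re-lists to obtain a fresh resourceVersion before watching again. A schematic, self-contained sketch of that loop; list and watch here are hypothetical stand-ins, not the real client-go ListerWatcher API:

package main

import (
	"errors"
	"fmt"
)

// errTooOld stands in for the "too old resource version" watch error.
var errTooOld = errors.New("too old resource version")

// list and watch are hypothetical stand-ins for a reflector's ListerWatcher.
func list() (resourceVersion string, err error) { return "19381", nil }

func watch(fromRV string) (lastRV string, err error) {
	// Pretend the server compacted history past fromRV, ending the watch.
	return fromRV, errTooOld
}

func main() {
	rv, err := list() // an initial list yields a snapshot and its resourceVersion
	if err != nil {
		panic(err)
	}
	for i := 0; i < 3; i++ { // bounded here; a real reflector loops forever
		last, werr := watch(rv)
		if errors.Is(werr, errTooOld) {
			// This is the W...] warning above: re-list for a fresh resourceVersion,
			// then resume watching. No data is lost; the cache is rebuilt from the list.
			fmt.Printf("watch ended with: %v: %s\n", werr, rv)
			rv, _ = list()
			continue
		}
		rv = last // the next watch resumes where the previous one stopped
	}
}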
Mar 06 14:40:23.097 E ns/openshift-machine-config-operator pod/machine-config-daemon-dkpt6 node/ip-10-0-130-221.us-east-2.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:40:23.097 E ns/openshift-machine-config-operator pod/machine-config-daemon-dkpt6 node/ip-10-0-130-221.us-east-2.compute.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:40:39.236 E ns/openshift-machine-config-operator pod/machine-config-daemon-89mcs node/ip-10-0-150-166.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:40:56.111 E ns/openshift-machine-config-operator pod/machine-config-daemon-jtlj5 node/ip-10-0-132-205.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:41:02.686 E ns/openshift-machine-config-operator pod/machine-config-daemon-gjqzc node/ip-10-0-131-229.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:41:19.381 E ns/openshift-machine-config-operator pod/machine-config-daemon-j5vn9 node/ip-10-0-154-22.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:43:13.132 E ns/openshift-machine-config-operator pod/machine-config-server-w6pl2 node/ip-10-0-131-229.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:08:32.634651       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:08:32.635796       1 api.go:51] Launching server on :22624\nI0306 14:08:32.635848       1 api.go:51] Launching server on :22623\n
Mar 06 14:43:23.696 E ns/openshift-monitoring pod/prometheus-adapter-6747d9c6d8-g65td node/ip-10-0-150-166.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0306 14:29:10.935068       1 adapter.go:93] successfully using in-cluster auth\nI0306 14:29:11.923151       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 06 14:43:23.803 E ns/openshift-ingress pod/router-default-56f89bc58d-lh52f node/ip-10-0-150-166.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:40:31.923089       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:40:38.162748       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:40:43.139989       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:40:55.303696       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:41:00.296091       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:41:05.296993       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:41:18.777207       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:41:23.777378       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:41:34.478868       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:43:22.331104       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Mar 06 14:43:24.213 E ns/openshift-service-ca pod/service-serving-cert-signer-66d7855575-whjm2 node/ip-10-0-131-229.us-east-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Mar 06 14:43:24.294 E ns/openshift-insights pod/insights-operator-5cd6d8b94-dslbn node/ip-10-0-131-229.us-east-2.compute.internal container=operator container exited with code 2 (Error): nd cloud.openshift.com token\nI0306 14:39:41.850303       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0306 14:39:41.929702       1 status.go:298] The operator is healthy\nI0306 14:39:41.929759       1 status.go:373] No status update necessary, objects are identical\nI0306 14:40:03.310301       1 httplog.go:90] GET /metrics: (9.226679ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:40:06.724222       1 httplog.go:90] GET /metrics: (1.926881ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:40:33.307960       1 httplog.go:90] GET /metrics: (6.854719ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:40:36.723937       1 httplog.go:90] GET /metrics: (1.675309ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:41:03.307733       1 httplog.go:90] GET /metrics: (6.635175ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:41:06.724181       1 httplog.go:90] GET /metrics: (1.628591ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:41:33.307918       1 httplog.go:90] GET /metrics: (6.85074ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:41:36.724027       1 httplog.go:90] GET /metrics: (1.547885ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:41:41.931117       1 status.go:298] The operator is healthy\nI0306 14:41:41.931188       1 status.go:373] No status update necessary, objects are identical\nI0306 14:42:03.307866       1 httplog.go:90] GET /metrics: (6.79596ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:42:06.734378       1 httplog.go:90] GET /metrics: (2.756002ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:42:33.307789       1 httplog.go:90] GET /metrics: (6.690794ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:42:36.724009       1 httplog.go:90] GET /metrics: (1.688332ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\nI0306 14:43:03.307244       1 httplog.go:90] GET /metrics: (6.17551ms) 200 [Prometheus/2.14.0 10.131.0.29:56634]\nI0306 14:43:06.724115       1 httplog.go:90] GET /metrics: (1.570949ms) 200 [Prometheus/2.14.0 10.129.2.24:59098]\n
Mar 06 14:43:25.489 E ns/openshift-service-ca pod/configmap-cabundle-injector-6ff5764769-kmc8t node/ip-10-0-131-229.us-east-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Mar 06 14:43:25.662 E ns/openshift-console-operator pod/console-operator-9587cfd84-m8nr8 node/ip-10-0-131-229.us-east-2.compute.internal container=console-operator container exited with code 255 (Error):     1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-03-06-134125\nE0306 14:31:05.736794       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-03-06-134125\nE0306 14:31:05.894869       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-03-06-134125\nE0306 14:31:05.894903       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-03-06-134125\nI0306 14:31:14.427394       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-03-06T14:13:15Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-03-06T14:31:14Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-03-06T14:31:14Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-03-06T14:13:15Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0306 14:31:14.438488       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"8621a4a5-9121-4f20-9bcc-bbe48290921f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0306 14:42:20.443961       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 29468 (29539)\nW0306 14:42:23.156565       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 29539 (29553)\nI0306 14:43:24.668601       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:43:24.668747       1 leaderelection.go:66] leaderelection lost\n
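Several of the non-zero exits above (console-operator here, cloud-credential-operator at 14:36:10, the service-ca controllers) are the standard operator shutdown path rather than crashes: the process holds a leader-election lease, gets SIGTERM when its node is drained for the upgrade, gives up or loses the lease, and exits through a fatal log call so a replacement pod can take over leadership. A purely illustrative, stdlib-only sketch of that shape; it stands in for, and is not, the real client-go leaderelection package (whose use of klog.Fatal accounts for the exit code 255):

package main

import (
	"context"
	"log"
	"os"
	"os/signal"
	"syscall"
)

// runControllers stands in for the operator's reconcile loops; it runs until
// the leadership context is cancelled.
func runControllers(ctx context.Context) { <-ctx.Done() }

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	// The kubelet sends SIGTERM when the node is drained during the upgrade.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		<-sigs
		log.Print("Received SIGTERM or SIGINT signal, shutting down controller.")
		cancel() // releasing the (simulated) lease cancels the leadership context
	}()

	runControllers(ctx)
	// Treat lost leadership as fatal: exiting lets the kubelet restart the pod
	// and a standby replica acquire the lease.
	log.Fatal("leaderelection lost")
}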
Mar 06 14:43:27.495 E ns/openshift-machine-config-operator pod/machine-config-controller-67c5974474-gwk9z node/ip-10-0-131-229.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): openshift.io/v1  } {MachineConfig  99-master-c4876ed0-6437-42de-99bf-505320873315-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0306 14:43:16.054156       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-bca405ec34daf954bddf1a67baa1b528\nI0306 14:43:16.056908       1 render_controller.go:516] Pool master: now targeting: rendered-master-49f6fb04eae73cae1d8e3c9ec68fc312\nI0306 14:43:21.054274       1 node_controller.go:758] Setting node ip-10-0-150-166.us-east-2.compute.internal to desired config rendered-worker-bca405ec34daf954bddf1a67baa1b528\nI0306 14:43:21.056988       1 node_controller.go:758] Setting node ip-10-0-131-229.us-east-2.compute.internal to desired config rendered-master-49f6fb04eae73cae1d8e3c9ec68fc312\nI0306 14:43:21.076608       1 node_controller.go:452] Pool master: node ip-10-0-131-229.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-49f6fb04eae73cae1d8e3c9ec68fc312\nI0306 14:43:21.078612       1 node_controller.go:452] Pool worker: node ip-10-0-150-166.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-bca405ec34daf954bddf1a67baa1b528\nI0306 14:43:22.090077       1 node_controller.go:452] Pool worker: node ip-10-0-150-166.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0306 14:43:22.093752       1 node_controller.go:452] Pool master: node ip-10-0-131-229.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0306 14:43:22.105960       1 node_controller.go:433] Pool worker: node ip-10-0-150-166.us-east-2.compute.internal is now reporting unready: node ip-10-0-150-166.us-east-2.compute.internal is reporting Unschedulable\nI0306 14:43:22.118401       1 node_controller.go:433] Pool master: node ip-10-0-131-229.us-east-2.compute.internal is now reporting unready: node ip-10-0-131-229.us-east-2.compute.internal is reporting Unschedulable\n
Mar 06 14:43:27.977 E ns/openshift-machine-config-operator pod/machine-config-server-lzds8 node/ip-10-0-132-205.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:08:32.608583       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:08:32.609588       1 api.go:51] Launching server on :22624\nI0306 14:08:32.609652       1 api.go:51] Launching server on :22623\nI0306 14:10:34.758131       1 api.go:97] Pool worker requested by 10.0.131.89:1380\nI0306 14:10:39.664754       1 api.go:97] Pool worker requested by 10.0.153.204:4625\nI0306 14:10:40.326857       1 api.go:97] Pool worker requested by 10.0.153.204:6624\n
Mar 06 14:43:32.878 E ns/openshift-machine-config-operator pod/machine-config-server-456cj node/ip-10-0-154-22.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:09:03.881035       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:09:03.881953       1 api.go:51] Launching server on :22624\nI0306 14:09:03.882038       1 api.go:51] Launching server on :22623\n
Mar 06 14:43:42.561 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-130-221.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-03-06T14:43:28.690Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-06T14:43:28.698Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-06T14:43:28.698Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-06T14:43:28.699Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-06T14:43:28.699Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-03-06T14:43:28.699Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-06T14:43:28.700Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-06T14:43:28.700Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-03-06
Mar 06 14:44:32.851 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Grafana host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io grafana)
Mar 06 14:45:47.011 E ns/openshift-cluster-node-tuning-operator pod/tuned-nfrx2 node/ip-10-0-150-166.us-east-2.compute.internal container=tuned container exited with code 143 (Error): mping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:40:51.968214     610 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:40:52.081113     610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:43:24.828499     610 openshift-tuned.go:550] Pod (openshift-console/downloads-6f986d8866-bmsl6) labels changed node wide: true\nI0306 14:43:26.966919     610 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:43:26.969680     610 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:43:27.086147     610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:43:30.753425     610 openshift-tuned.go:550] Pod (openshift-ingress/router-default-56f89bc58d-lh52f) labels changed node wide: true\nI0306 14:43:31.966881     610 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:43:31.968245     610 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:43:32.080451     610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:43:54.809707     610 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-3597/service-test-wz4tg) labels changed node wide: true\nI0306 14:43:56.966883     610 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:43:56.969296     610 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:43:57.081318     610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:44:00.734627     610 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-1494/foo-92d46) labels changed node wide: true\nI0306 14:44:01.401608     610 openshift-tuned.go:137] Received signal: terminated\nI0306 14:44:01.401661     610 openshift-tuned.go:304] Sending TERM to PID 1264\n
Mar 06 14:45:47.060 E ns/openshift-monitoring pod/node-exporter-c587h node/ip-10-0-150-166.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:31:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:31:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:45:47.073 E ns/openshift-sdn pod/ovs-ttbhn node/ip-10-0-150-166.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): onnmgr|INFO|br0<->unix#524: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:23.233Z|00141|connmgr|INFO|br0<->unix#527: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:23.269Z|00142|bridge|INFO|bridge br0: deleted interface vethedba4ace on port 7\n2020-03-06T14:43:23.316Z|00143|connmgr|INFO|br0<->unix#530: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:23.384Z|00144|connmgr|INFO|br0<->unix#533: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:23.409Z|00145|bridge|INFO|bridge br0: deleted interface vethf7772ffd on port 4\n2020-03-06T14:43:23.468Z|00146|connmgr|INFO|br0<->unix#536: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:23.525Z|00147|connmgr|INFO|br0<->unix#539: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:23.558Z|00148|bridge|INFO|bridge br0: deleted interface vethf6878559 on port 9\n2020-03-06T14:43:23.397Z|00027|jsonrpc|WARN|unix#472: receive error: Connection reset by peer\n2020-03-06T14:43:23.397Z|00028|reconnect|WARN|unix#472: connection dropped (Connection reset by peer)\n2020-03-06T14:43:23.402Z|00029|jsonrpc|WARN|unix#473: receive error: Connection reset by peer\n2020-03-06T14:43:23.402Z|00030|reconnect|WARN|unix#473: connection dropped (Connection reset by peer)\n2020-03-06T14:43:23.541Z|00031|jsonrpc|WARN|unix#478: receive error: Connection reset by peer\n2020-03-06T14:43:23.541Z|00032|reconnect|WARN|unix#478: connection dropped (Connection reset by peer)\n2020-03-06T14:43:52.557Z|00149|connmgr|INFO|br0<->unix#563: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:52.584Z|00150|connmgr|INFO|br0<->unix#566: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:52.609Z|00151|bridge|INFO|bridge br0: deleted interface veth52abd478 on port 10\n2020-03-06T14:43:52.814Z|00152|connmgr|INFO|br0<->unix#569: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:52.841Z|00153|connmgr|INFO|br0<->unix#572: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:52.862Z|00154|bridge|INFO|bridge br0: deleted interface veth48f5a0b9 on port 5\nTerminated\n
Mar 06 14:45:47.102 E ns/openshift-multus pod/multus-rvx2x node/ip-10-0-150-166.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:45:47.136 E ns/openshift-machine-config-operator pod/machine-config-daemon-2q47r node/ip-10-0-150-166.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:45:55.723 E ns/openshift-machine-config-operator pod/machine-config-daemon-2q47r node/ip-10-0-150-166.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:45:55.901 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): vcc: required revision has been compacted\nE0306 14:43:35.960034       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:35.961871       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:35.962003       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:35.962108       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:36.003734       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:36.003905       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:43:36.004059       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0306 14:43:36.214153       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0306 14:43:36.214279       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0306 14:43:36.266235       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:43:36.266517       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nW0306 14:43:36.288201       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.132.205 10.0.154.22]\nI0306 14:43:36.306666       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Mar 06 14:45:55.901 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0306 14:29:22.686938       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 06 14:45:55.901 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:39:28.763813       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:39:28.764259       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:39:28.972197       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:39:28.972733       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
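The kube-apiserver termination events above ("TerminationStart ... becoming unready, but keeping serving", then "TerminationPreShutdownHooksFinished") describe a graceful rolling-restart pattern: on SIGTERM the server first fails its readiness probe so load balancers drain it, keeps answering in-flight and new requests during a grace period, runs its pre-shutdown hooks, and only then stops listening. A minimal stdlib sketch of that shape (Go 1.19+ for atomic.Bool; the port and durations are hypothetical, and this is not the actual kube-apiserver machinery):

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var terminating atomic.Bool // flipped at "TerminationStart"

	mux := http.NewServeMux()
	// Readiness starts failing as soon as termination begins, so load balancers
	// drain this instance, while regular requests keep being served meanwhile.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if terminating.Load() {
			http.Error(w, "shutting down", http.StatusServiceUnavailable)
			return
		}
		w.Write([]byte("ok"))
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("still serving"))
	})

	srv := &http.Server{Addr: ":6443", Handler: mux}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)
	go func() {
		<-sigs
		log.Print("TerminationStart: becoming unready, but keeping serving")
		terminating.Store(true)
		time.Sleep(10 * time.Second) // grace period for clients to notice /readyz failing
		log.Print("TerminationPreShutdownHooksFinished: all pre-shutdown hooks have been finished")
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		srv.Shutdown(ctx) // stop accepting new connections, let in-flight requests finish
	}()

	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatal(err)
	}
}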
Mar 06 14:45:55.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): I0306 14:29:26.820859       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0306 14:29:26.828952       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0306 14:29:26.829665       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nE0306 14:29:28.592792       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\n
Mar 06 14:45:55.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:42:20.401551       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:42:20.401955       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:42:30.436899       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:42:30.437387       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:42:40.446223       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:42:40.446581       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:42:50.455915       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:42:50.456783       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:43:00.465267       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:43:00.465651       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:43:10.474860       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:43:10.475314       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:43:20.484182       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:43:20.484571       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:43:30.497672       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:43:30.498023       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Mar 06 14:45:55.939 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): penshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1583503684" (2020-03-06 14:08:12 +0000 UTC to 2022-03-06 14:08:13 +0000 UTC (now=2020-03-06 14:29:26.231510311 +0000 UTC))\nI0306 14:29:26.231586       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt\nI0306 14:29:26.231754       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0306 14:29:26.232001       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583504966" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504965" (2020-03-06 13:29:24 +0000 UTC to 2021-03-06 13:29:24 +0000 UTC (now=2020-03-06 14:29:26.23196579 +0000 UTC))\nI0306 14:29:26.232154       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1583504966" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504965" (2020-03-06 13:29:24 +0000 UTC to 2021-03-06 13:29:24 +0000 UTC (now=2020-03-06 14:29:26.232133451 +0000 UTC))\nI0306 14:29:26.232195       1 secure_serving.go:178] Serving securely on [::]:10257\nI0306 14:29:26.232252       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0306 14:29:26.231545       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\nI0306 14:29:26.232771       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0306 14:29:28.588123       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Mar 06 14:45:55.956 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-229.us-east-2.compute.internal node/ip-10-0-131-229.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): t-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1583503684" (2020-03-06 14:08:14 +0000 UTC to 2022-03-06 14:08:15 +0000 UTC (now=2020-03-06 14:29:29.207060587 +0000 UTC))\nI0306 14:29:29.207534       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583504966" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504965" (2020-03-06 13:29:23 +0000 UTC to 2021-03-06 13:29:23 +0000 UTC (now=2020-03-06 14:29:29.207513182 +0000 UTC))\nI0306 14:29:29.207720       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1583504966" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504965" (2020-03-06 13:29:23 +0000 UTC to 2021-03-06 13:29:23 +0000 UTC (now=2020-03-06 14:29:29.207699535 +0000 UTC))\nI0306 14:29:29.293058       1 node_tree.go:93] Added node "ip-10-0-129-121.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:29:29.293306       1 node_tree.go:93] Added node "ip-10-0-130-221.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:29:29.293466       1 node_tree.go:93] Added node "ip-10-0-131-229.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:29:29.293646       1 node_tree.go:93] Added node "ip-10-0-132-205.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:29:29.293827       1 node_tree.go:93] Added node "ip-10-0-150-166.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0306 14:29:29.293936       1 node_tree.go:93] Added node "ip-10-0-154-22.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0306 14:29:29.382893       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Mar 06 14:45:56.036 E ns/openshift-monitoring pod/node-exporter-l5jj8 node/ip-10-0-131-229.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:29:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:45:56.062 E ns/openshift-cluster-node-tuning-operator pod/tuned-6645x node/ip-10-0-131-229.us-east-2.compute.internal container=tuned container exited with code 143 (Error): de: false\nI0306 14:43:23.052455   65527 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-4-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:43:23.053617   65527 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:43:23.068159   65527 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-6-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:43:23.273120   65527 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:43:23.623536   65527 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-131-229.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:43:23.834717   65527 openshift-tuned.go:550] Pod (openshift-cluster-version/version--rsc7b-hnllm) labels changed node wide: true\nI0306 14:43:27.753633   65527 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:43:27.755296   65527 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:43:27.959748   65527 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:43:28.597747   65527 openshift-tuned.go:550] Pod (openshift-console/console-56c5d4b44f-ptrq9) labels changed node wide: true\nI0306 14:43:32.753733   65527 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:43:32.755212   65527 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:43:32.883894   65527 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:43:34.472326   65527 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-controller-67c5974474-gwk9z) labels changed node wide: true\n
Mar 06 14:45:56.087 E ns/openshift-controller-manager pod/controller-manager-hrzfm node/ip-10-0-131-229.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 06 14:45:56.113 E ns/openshift-sdn pod/sdn-controller-ghwv8 node/ip-10-0-131-229.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0306 14:34:12.850378       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 06 14:45:56.145 E ns/openshift-multus pod/multus-admission-controller-bd6h4 node/ip-10-0-131-229.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 06 14:45:56.166 E ns/openshift-sdn pod/ovs-drhj5 node/ip-10-0-131-229.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): e br0: deleted interface veth412f23aa on port 19\n2020-03-06T14:43:24.550Z|00182|connmgr|INFO|br0<->unix#544: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:24.600Z|00183|connmgr|INFO|br0<->unix#547: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:24.635Z|00184|bridge|INFO|bridge br0: deleted interface veth7e3cd7ff on port 5\n2020-03-06T14:43:25.038Z|00022|jsonrpc|WARN|Dropped 7 log messages in last 487 seconds (most recently, 487 seconds ago) due to excessive rate\n2020-03-06T14:43:25.038Z|00023|jsonrpc|WARN|unix#478: send error: Broken pipe\n2020-03-06T14:43:25.038Z|00024|reconnect|WARN|unix#478: connection dropped (Broken pipe)\n2020-03-06T14:43:25.023Z|00185|connmgr|INFO|br0<->unix#550: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:25.082Z|00186|connmgr|INFO|br0<->unix#553: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:25.125Z|00187|bridge|INFO|bridge br0: deleted interface vethb984f992 on port 11\n2020-03-06T14:43:26.549Z|00188|connmgr|INFO|br0<->unix#558: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:26.579Z|00189|connmgr|INFO|br0<->unix#561: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:26.608Z|00190|bridge|INFO|bridge br0: deleted interface veth75244cc0 on port 7\n2020-03-06T14:43:27.039Z|00191|connmgr|INFO|br0<->unix#564: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:27.076Z|00192|connmgr|INFO|br0<->unix#567: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:27.101Z|00193|bridge|INFO|bridge br0: deleted interface veth56150a02 on port 21\n2020-03-06T14:43:28.020Z|00025|jsonrpc|WARN|unix#498: receive error: Connection reset by peer\n2020-03-06T14:43:28.020Z|00026|reconnect|WARN|unix#498: connection dropped (Connection reset by peer)\n2020-03-06T14:43:27.967Z|00194|connmgr|INFO|br0<->unix#570: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:43:28.001Z|00195|connmgr|INFO|br0<->unix#573: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:43:28.028Z|00196|bridge|INFO|bridge br0: deleted interface vethe3f6244c on port 18\nTerminated\n
Mar 06 14:45:56.216 E ns/openshift-multus pod/multus-ctgp7 node/ip-10-0-131-229.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:45:56.272 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwzxx node/ip-10-0-131-229.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:45:56.294 E ns/openshift-machine-config-operator pod/machine-config-server-r68qc node/ip-10-0-131-229.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:43:26.023611       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:43:26.024830       1 api.go:51] Launching server on :22624\nI0306 14:43:26.024869       1 api.go:51] Launching server on :22623\n
Mar 06 14:46:05.922 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
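The dns Degraded events in this run (here and again at 14:49:09 below) come from the DNS operator comparing desired versus available pods in its DaemonSets while control-plane nodes reboot. The following is only a minimal sketch of that kind of check written against client-go; the kubeconfig loading, the openshift-dns namespace listing, and the context-taking method signatures are illustrative assumptions, not the operator's actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (an assumption for the sketch;
	// in-cluster components would use rest.InClusterConfig instead).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List the DNS DaemonSets and compare scheduled vs. available pods,
	// which is the condition the "NotAllDNSesAvailable" message reflects.
	dsList, err := client.AppsV1().DaemonSets("openshift-dns").List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ds := range dsList.Items {
		s := ds.Status
		degraded := s.NumberAvailable < s.DesiredNumberScheduled
		fmt.Printf("%s: desired=%d available=%d degraded=%v\n",
			ds.Name, s.DesiredNumberScheduled, s.NumberAvailable, degraded)
	}
}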
Mar 06 14:46:06.108 E ns/openshift-ingress pod/router-default-56f89bc58d-w8jc4 node/ip-10-0-129-121.us-east-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:43:38.698661       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:43:43.684311       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:43:48.687628       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:44:12.848519       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:44:17.842386       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:44:38.113245       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:44:43.109256       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:45:50.725902       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:45:55.713036       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0306 14:46:00.714011       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Mar 06 14:46:07.112 E ns/openshift-monitoring pod/grafana-786dffcbbf-474lr node/ip-10-0-129-121.us-east-2.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:07.112 E ns/openshift-monitoring pod/grafana-786dffcbbf-474lr node/ip-10-0-129-121.us-east-2.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:07.144 E ns/openshift-monitoring pod/telemeter-client-675dd6c55f-g8xtv node/ip-10-0-129-121.us-east-2.compute.internal container=reload container exited with code 2 (Error): 
Mar 06 14:46:07.144 E ns/openshift-monitoring pod/telemeter-client-675dd6c55f-g8xtv node/ip-10-0-129-121.us-east-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Mar 06 14:46:07.256 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-121.us-east-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:07.256 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-121.us-east-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:07.256 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-121.us-east-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:08.134 E ns/openshift-machine-config-operator pod/machine-config-daemon-bwzxx node/ip-10-0-131-229.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:46:14.053 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-79f6d89ff8-9tmct node/ip-10-0-132-205.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ge":"StaticPodsDegraded: nodes/ip-10-0-131-229.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-131-229.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-03-06T14:30:14Z","message":"Progressing: 3 nodes are at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-03-06T14:11:28Z","message":"Available: 3 nodes are active; 3 nodes are at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-03-06T14:07:58Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0306 14:46:05.575821       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"ec3e5abd-aecd-49da-ad7b-1698bafbfb8b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-131-229.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-131-229.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: The master nodes not ready: node \"ip-10-0-131-229.us-east-2.compute.internal\" not ready since 2020-03-06 14:45:55 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "StaticPodsDegraded: nodes/ip-10-0-131-229.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-131-229.us-east-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0306 14:46:12.826254       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:46:12.826330       1 leaderelection.go:66] leaderelection lost\n
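Several operator exits in this run (here, and again for kube-controller-manager-operator, cluster-node-tuning-operator, and the svcat-apiserver operator below) follow the same shape: the pod receives SIGTERM during the node roll, its leader lease is released or lost, and it deliberately terminates with a fatal "leaderelection lost" so the replacement pod starts from a clean election. The block below is a hedged, minimal sketch of that pattern using client-go's leaderelection package; the namespace, lock name, and timings are illustrative and not taken from these operators.

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// ConfigMap-based lock, as suggested by the "configmaps ... is forbidden"
	// lock-retrieval errors in the controller-manager logs above. Namespace and
	// name here are hypothetical; newer client-go releases favor Lease locks.
	lock, err := resourcelock.New(resourcelock.ConfigMapsResourceLock,
		"openshift-example-operator", "example-operator-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		klog.Fatal(err)
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // a real operator cancels this context on SIGTERM/SIGINT

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 60 * time.Second, // illustrative timings only
		RenewDeadline: 30 * time.Second,
		RetryPeriod:   10 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				<-ctx.Done() // run controllers until shutdown
			},
			OnStoppedLeading: func() {
				// Same fatal exit seen in the logs: losing the lease, including
				// during shutdown, ends the process so a fresh pod re-elects.
				klog.Fatal("leaderelection lost")
			},
		},
	})
}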
Mar 06 14:46:17.107 E ns/openshift-operator-lifecycle-manager pod/packageserver-57c8496c84-hj89s node/ip-10-0-131-229.us-east-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:17.332 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-746b458787-5sbww node/ip-10-0-132-205.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-746b458787-5sbww_cdefb8be-d1fb-420b-9184-855f8fd5a0cb/kube-controller-manager-operator/0.log": lstat /var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-746b458787-5sbww_cdefb8be-d1fb-420b-9184-855f8fd5a0cb/kube-controller-manager-operator/0.log: no such file or directory
Mar 06 14:46:17.458 E ns/openshift-service-ca pod/apiservice-cabundle-injector-59f697c649-tld4n node/ip-10-0-132-205.us-east-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:46:18.392 E ns/openshift-console pod/console-56c5d4b44f-vpnbc node/ip-10-0-132-205.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/03/6 14:43:32 cmd/main: cookies are secure!\n2020/03/6 14:43:37 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/03/6 14:43:52 auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/03/6 14:44:07 auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/03/6 14:44:22 auth: error contacting auth provider (retrying in 10s): Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020/03/6 14:44:32 cmd/main: Binding to [::]:8443...\n2020/03/6 14:44:32 cmd/main: using TLS\n
Mar 06 14:47:47.050 E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:48:46.353 E ns/openshift-cluster-node-tuning-operator pod/tuned-7p4cx node/ip-10-0-129-121.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 0] Pod (e2e-k8s-sig-apps-deployment-upgrade-8040/dp-657fc4b57d-vs9pc) labels changed node wide: true\nI0306 14:46:07.300419   40966 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:07.304054   40966 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:07.438260   40966 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:46:07.439370   40966 openshift-tuned.go:550] Pod (openshift-monitoring/kube-state-metrics-6f456b9bbb-dp7n8) labels changed node wide: true\nI0306 14:46:12.300411   40966 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:12.301960   40966 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:12.418578   40966 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:46:19.361864   40966 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-0) labels changed node wide: true\nI0306 14:46:22.300444   40966 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:22.301858   40966 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:22.412336   40966 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:46:39.309813   40966 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-3597/service-test-pm8sv) labels changed node wide: true\nI0306 14:46:42.300570   40966 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:42.301986   40966 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:42.411758   40966 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:46:59.309423   40966 openshift-tuned.go:550] Pod (openshift-console/downloads-6f986d8866-grv77) labels changed node wide: true\n
Mar 06 14:48:46.387 E ns/openshift-monitoring pod/node-exporter-lkxw8 node/ip-10-0-129-121.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:30:51Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:51Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:48:46.420 E ns/openshift-sdn pod/ovs-h8x98 node/ip-10-0-129-121.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): |INFO|br0<->unix#602: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:05.590Z|00162|connmgr|INFO|br0<->unix#605: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:05.624Z|00163|bridge|INFO|bridge br0: deleted interface veth0c4d6c8b on port 3\n2020-03-06T14:46:05.681Z|00164|connmgr|INFO|br0<->unix#608: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:05.732Z|00165|connmgr|INFO|br0<->unix#611: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:05.758Z|00166|bridge|INFO|bridge br0: deleted interface veth6a60bf25 on port 4\n2020-03-06T14:46:05.804Z|00167|connmgr|INFO|br0<->unix#614: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:05.859Z|00168|connmgr|INFO|br0<->unix#617: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:05.886Z|00169|bridge|INFO|bridge br0: deleted interface veth7ab6ecc8 on port 14\n2020-03-06T14:46:05.927Z|00170|connmgr|INFO|br0<->unix#620: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:05.962Z|00171|connmgr|INFO|br0<->unix#623: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:05.985Z|00172|bridge|INFO|bridge br0: deleted interface veth8b19819e on port 6\n2020-03-06T14:46:06.028Z|00173|connmgr|INFO|br0<->unix#626: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:06.071Z|00174|connmgr|INFO|br0<->unix#629: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:06.094Z|00175|bridge|INFO|bridge br0: deleted interface veth49a82578 on port 7\n2020-03-06T14:46:06.171Z|00176|connmgr|INFO|br0<->unix#632: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:06.218Z|00177|connmgr|INFO|br0<->unix#635: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:06.253Z|00178|bridge|INFO|bridge br0: deleted interface vethcbfadf28 on port 15\n2020-03-06T14:46:34.986Z|00179|connmgr|INFO|br0<->unix#659: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:35.013Z|00180|connmgr|INFO|br0<->unix#662: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:35.033Z|00181|bridge|INFO|bridge br0: deleted interface vetheeee622d on port 17\nTerminated\n
Mar 06 14:48:46.436 E ns/openshift-multus pod/multus-9jqcw node/ip-10-0-129-121.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:48:46.487 E ns/openshift-machine-config-operator pod/machine-config-daemon-n2njp node/ip-10-0-129-121.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:48:49.043 E ns/openshift-multus pod/multus-9jqcw node/ip-10-0-129-121.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:48:55.994 E ns/openshift-machine-config-operator pod/machine-config-daemon-n2njp node/ip-10-0-129-121.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:48:59.336 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error): I0306 14:27:26.587747       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0306 14:27:26.597081       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0306 14:27:26.597898       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0306 14:27:31.129508       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\n
Mar 06 14:48:59.336 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:45:30.633628       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:45:30.634110       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:45:40.644627       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:45:40.645824       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:45:50.658354       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:45:50.658740       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:46:00.668265       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:46:00.668623       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:46:10.689307       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:46:10.689764       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:46:20.715057       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:46:20.715369       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:46:30.729937       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:46:30.730347       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:46:40.739676       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:46:40.740057       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Mar 06 14:48:59.336 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): ' Scaled down replica set packageserver-58dc7fff6d to 0\nI0306 14:46:26.536220       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-58dc7fff6d", UID:"ac493d34-44f5-411f-a8d5-c949b84fc4bf", APIVersion:"apps/v1", ResourceVersion:"32813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-58dc7fff6d-gd8xg\nI0306 14:46:26.543188       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-5455c6b7bf, need 2, creating 1\nI0306 14:46:26.543815       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"bedd7d4e-e12d-473a-ae49-6d54655363ce", APIVersion:"apps/v1", ResourceVersion:"32816", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-5455c6b7bf to 2\nI0306 14:46:26.563999       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-5455c6b7bf", UID:"86b9aac4-a762-4796-be04-376fcd650f1f", APIVersion:"apps/v1", ResourceVersion:"32818", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-5455c6b7bf-kr27m\nE0306 14:46:29.671032       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0306 14:46:39.531770       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-57954c6694, need 3, creating 1\nI0306 14:46:39.541016       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-57954c6694", UID:"44f280ee-d9e2-48b4-922f-68f22942a653", APIVersion:"apps/v1", ResourceVersion:"33020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-57954c6694-xpksp\n
Mar 06 14:48:59.404 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): erving-signer@1583503684" (2020-03-06 14:08:14 +0000 UTC to 2022-03-06 14:08:15 +0000 UTC (now=2020-03-06 14:27:32.313984168 +0000 UTC))\nI0306 14:27:32.314433       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583504846" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504846" (2020-03-06 13:27:25 +0000 UTC to 2021-03-06 13:27:25 +0000 UTC (now=2020-03-06 14:27:32.314399061 +0000 UTC))\nI0306 14:27:32.314683       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1583504846" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583504846" (2020-03-06 13:27:25 +0000 UTC to 2021-03-06 13:27:25 +0000 UTC (now=2020-03-06 14:27:32.314661991 +0000 UTC))\nI0306 14:27:32.393502       1 node_tree.go:93] Added node "ip-10-0-131-229.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:27:32.395164       1 node_tree.go:93] Added node "ip-10-0-132-205.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:27:32.395324       1 node_tree.go:93] Added node "ip-10-0-150-166.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0306 14:27:32.395381       1 node_tree.go:93] Added node "ip-10-0-154-22.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2b" to NodeTree\nI0306 14:27:32.395447       1 node_tree.go:93] Added node "ip-10-0-129-121.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:27:32.395493       1 node_tree.go:93] Added node "ip-10-0-130-221.us-east-2.compute.internal" in group "us-east-2:\x00:us-east-2a" to NodeTree\nI0306 14:27:32.513914       1 leaderelection.go:241] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\nW0306 14:43:37.045933       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 20173 (30495)\n
Mar 06 14:48:59.523 E ns/openshift-cluster-node-tuning-operator pod/tuned-mcgn6 node/ip-10-0-132-205.us-east-2.compute.internal container=tuned container exited with code 143 (Error): rue\nI0306 14:46:21.647746   65525 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:21.649862   65525 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:22.292519   65525 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:46:23.002991   65525 openshift-tuned.go:550] Pod (openshift-insights/insights-operator-5cd6d8b94-prdvt) labels changed node wide: true\nI0306 14:46:26.647684   65525 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:26.649632   65525 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:26.852302   65525 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:46:32.875726   65525 openshift-tuned.go:550] Pod (openshift-console-operator/console-operator-9587cfd84-6vcd4) labels changed node wide: true\nI0306 14:46:36.647663   65525 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:36.649089   65525 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:36.770300   65525 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:46:41.558641   65525 openshift-tuned.go:550] Pod (openshift-machine-config-operator/etcd-quorum-guard-57954c6694-npml4) labels changed node wide: true\nI0306 14:46:41.647683   65525 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:41.649121   65525 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:41.773025   65525 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:46:42.845257   65525 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-132-205.us-east-2.compute.internal) labels changed node wide: true\n
Mar 06 14:48:59.571 E ns/openshift-monitoring pod/node-exporter-rfdjq node/ip-10-0-132-205.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:29:58Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:29:58Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:48:59.667 E ns/openshift-controller-manager pod/controller-manager-qrbp9 node/ip-10-0-132-205.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 06 14:48:59.696 E ns/openshift-sdn pod/sdn-controller-48gwl node/ip-10-0-132-205.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0306 14:34:05.189051       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 06 14:48:59.718 E ns/openshift-multus pod/multus-g9vwq node/ip-10-0-132-205.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:48:59.754 E ns/openshift-sdn pod/ovs-g8mrh node/ip-10-0-132-205.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): :46:17.351Z|00044|reconnect|WARN|unix#671: connection dropped (Broken pipe)\n2020-03-06T14:46:17.360Z|00045|reconnect|WARN|unix#672: connection dropped (Broken pipe)\n2020-03-06T14:46:17.533Z|00046|reconnect|WARN|unix#676: connection dropped (Broken pipe)\n2020-03-06T14:46:17.508Z|00237|connmgr|INFO|br0<->unix#792: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:17.554Z|00238|bridge|INFO|bridge br0: deleted interface vethd9314e08 on port 14\n2020-03-06T14:46:17.629Z|00239|connmgr|INFO|br0<->unix#795: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:17.674Z|00240|connmgr|INFO|br0<->unix#798: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:17.702Z|00241|bridge|INFO|bridge br0: deleted interface veth5ae913dd on port 27\n2020-03-06T14:46:18.035Z|00242|connmgr|INFO|br0<->unix#801: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:18.075Z|00243|connmgr|INFO|br0<->unix#804: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:18.107Z|00244|bridge|INFO|bridge br0: deleted interface vethf86979b5 on port 21\n2020-03-06T14:46:18.691Z|00047|reconnect|WARN|unix#692: connection dropped (Connection reset by peer)\n2020-03-06T14:46:18.599Z|00245|connmgr|INFO|br0<->unix#807: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:18.665Z|00246|connmgr|INFO|br0<->unix#810: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:18.699Z|00247|bridge|INFO|bridge br0: deleted interface veth99c2a2bb on port 23\n2020-03-06T14:46:18.926Z|00248|connmgr|INFO|br0<->unix#815: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:19.008Z|00249|connmgr|INFO|br0<->unix#818: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:19.115Z|00250|bridge|INFO|bridge br0: deleted interface vethdd0dd5e1 on port 5\n2020-03-06T14:46:19.674Z|00251|connmgr|INFO|br0<->unix#822: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:46:19.735Z|00252|connmgr|INFO|br0<->unix#825: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:46:19.780Z|00253|bridge|INFO|bridge br0: deleted interface veth12f1b9ae on port 26\nTerminated\n
Mar 06 14:48:59.811 E ns/openshift-multus pod/multus-admission-controller-wspzs node/ip-10-0-132-205.us-east-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Mar 06 14:48:59.859 E ns/openshift-machine-config-operator pod/machine-config-daemon-k8mv5 node/ip-10-0-132-205.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:48:59.874 E ns/openshift-machine-config-operator pod/machine-config-server-qbrkv node/ip-10-0-132-205.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:43:32.035615       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:43:32.037201       1 api.go:51] Launching server on :22624\nI0306 14:43:32.037526       1 api.go:51] Launching server on :22623\n
Mar 06 14:49:03.296 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): r", Name:"kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0306 14:46:43.028603       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nW0306 14:46:43.058707       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.0.131.229 10.0.154.22]\nI0306 14:46:43.082646       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0306 14:46:43.119582       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.119874       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120069       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120253       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120365       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120433       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120578       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120597       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nI0306 14:46:43.120754       1 log.go:172] httputil: ReverseProxy read error during body copy: unexpected EOF\nE0306 14:46:43.124397       1 reflector.go:280] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)\n
Mar 06 14:49:03.296 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0306 14:26:19.958524       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 06 14:49:03.296 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-205.us-east-2.compute.internal node/ip-10-0-132-205.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:37:32.190284       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:37:32.190664       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:37:32.397758       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:37:32.400130       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Mar 06 14:49:03.423 E ns/openshift-marketplace pod/redhat-operators-85b45d85dd-fplt7 node/ip-10-0-130-221.us-east-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Mar 06 14:49:04.805 E ns/openshift-marketplace pod/certified-operators-5bcf47bb59-s6fwx node/ip-10-0-130-221.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Mar 06 14:49:04.833 E ns/openshift-marketplace pod/community-operators-59c675bc9d-nrk5l node/ip-10-0-130-221.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Mar 06 14:49:04.853 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-130-221.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/06 14:43:40 Watching directory: "/etc/alertmanager/config"\n
Mar 06 14:49:04.853 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-130-221.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/06 14:43:40 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:43:40 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/06 14:43:40 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/06 14:44:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/06 14:44:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/06 14:44:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:44:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/06 14:44:10 http.go:96: HTTPS: listening on [::]:9095\n
Mar 06 14:49:04.933 E ns/openshift-monitoring pod/prometheus-adapter-6747d9c6d8-ntxcn node/ip-10-0-130-221.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0306 14:43:24.835275       1 adapter.go:93] successfully using in-cluster auth\nI0306 14:43:25.319758       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 06 14:49:04.958 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-130-221.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/06 14:29:18 Watching directory: "/etc/alertmanager/config"\n
Mar 06 14:49:04.958 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-130-221.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/06 14:29:18 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:29:18 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/06 14:29:18 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/06 14:29:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/06 14:29:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/06 14:29:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/06 14:29:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/06 14:29:18 http.go:96: HTTPS: listening on [::]:9095\n2020/03/06 14:34:16 reverseproxy.go:447: http: proxy error: context canceled\n2020/03/06 14:34:40 reverseproxy.go:447: http: proxy error: context canceled\n2020/03/06 14:34:46 reverseproxy.go:447: http: proxy error: context canceled\n
Mar 06 14:49:06.744 E ns/openshift-multus pod/multus-g9vwq node/ip-10-0-132-205.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:49:09.244 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Mar 06 14:49:11.811 E ns/openshift-machine-config-operator pod/machine-config-daemon-k8mv5 node/ip-10-0-132-205.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:49:18.446 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7755qv8f6 node/ip-10-0-154-22.us-east-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:18.556 E ns/openshift-authentication pod/oauth-openshift-94dc75d9b-fpg5m node/ip-10-0-154-22.us-east-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:19.606 E ns/openshift-cluster-machine-approver pod/machine-approver-689d89756f-skpkl node/ip-10-0-154-22.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 4:29:27.870696       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0306 14:29:27.870759       1 main.go:236] Starting Machine Approver\nI0306 14:29:27.971009       1 main.go:146] CSR csr-dzr5v added\nI0306 14:29:27.971035       1 main.go:149] CSR csr-dzr5v is already approved\nI0306 14:29:27.971056       1 main.go:146] CSR csr-h582q added\nI0306 14:29:27.971066       1 main.go:149] CSR csr-h582q is already approved\nI0306 14:29:27.971089       1 main.go:146] CSR csr-hf6l8 added\nI0306 14:29:27.971100       1 main.go:149] CSR csr-hf6l8 is already approved\nI0306 14:29:27.971112       1 main.go:146] CSR csr-n984f added\nI0306 14:29:27.971123       1 main.go:149] CSR csr-n984f is already approved\nI0306 14:29:27.971138       1 main.go:146] CSR csr-vprll added\nI0306 14:29:27.971149       1 main.go:149] CSR csr-vprll is already approved\nI0306 14:29:27.971161       1 main.go:146] CSR csr-5s5wx added\nI0306 14:29:27.971171       1 main.go:149] CSR csr-5s5wx is already approved\nI0306 14:29:27.971184       1 main.go:146] CSR csr-7b9z5 added\nI0306 14:29:27.971194       1 main.go:149] CSR csr-7b9z5 is already approved\nI0306 14:29:27.971206       1 main.go:146] CSR csr-c4rwr added\nI0306 14:29:27.971217       1 main.go:149] CSR csr-c4rwr is already approved\nI0306 14:29:27.971230       1 main.go:146] CSR csr-csqt7 added\nI0306 14:29:27.971240       1 main.go:149] CSR csr-csqt7 is already approved\nI0306 14:29:27.971252       1 main.go:146] CSR csr-cxhvx added\nI0306 14:29:27.971263       1 main.go:149] CSR csr-cxhvx is already approved\nI0306 14:29:27.971277       1 main.go:146] CSR csr-2fbgr added\nI0306 14:29:27.971287       1 main.go:149] CSR csr-2fbgr is already approved\nI0306 14:29:27.971299       1 main.go:146] CSR csr-8m8v4 added\nI0306 14:29:27.971310       1 main.go:149] CSR csr-8m8v4 is already approved\nW0306 14:46:43.953952       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 19377 (33157)\n
Mar 06 14:49:21.028 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-5d966bbd94-87hdm node/ip-10-0-154-22.us-east-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-5d966bbd94-87hdm_ec4940b0-3ac1-472e-b570-b2e7b24d1ed5/cluster-node-tuning-operator/0.log": lstat /var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-5d966bbd94-87hdm_ec4940b0-3ac1-472e-b570-b2e7b24d1ed5/cluster-node-tuning-operator/0.log: no such file or directory
Mar 06 14:49:22.183 E ns/openshift-console pod/console-56c5d4b44f-8r4gg node/ip-10-0-154-22.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020/03/6 14:31:08 cmd/main: cookies are secure!\n2020/03/6 14:31:08 cmd/main: Binding to [::]:8443...\n2020/03/6 14:31:08 cmd/main: using TLS\n
Mar 06 14:49:23.687 E ns/openshift-machine-api pod/machine-api-controllers-85964b597c-vwj77 node/ip-10-0-154-22.us-east-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:23.687 E ns/openshift-machine-api pod/machine-api-controllers-85964b597c-vwj77 node/ip-10-0-154-22.us-east-2.compute.internal container=machine-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:23.687 E ns/openshift-machine-api pod/machine-api-controllers-85964b597c-vwj77 node/ip-10-0-154-22.us-east-2.compute.internal container=nodelink-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:23.687 E ns/openshift-machine-api pod/machine-api-controllers-85964b597c-vwj77 node/ip-10-0-154-22.us-east-2.compute.internal container=machine-healthcheck-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 06 14:49:25.863 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7c64758cf-w2jq6 node/ip-10-0-154-22.us-east-2.compute.internal container=operator container exited with code 255 (Error): onGoRestful\nI0306 14:48:36.468996       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0306 14:48:36.470869       1 httplog.go:90] GET /metrics: (8.865084ms) 200 [Prometheus/2.14.0 10.128.2.27:52896]\nI0306 14:48:39.022021       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0306 14:48:44.209667       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0306 14:48:44.209699       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0306 14:48:44.216475       1 httplog.go:90] GET /metrics: (2.205668ms) 200 [Prometheus/2.14.0 10.131.0.15:47292]\nI0306 14:48:49.044240       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0306 14:48:59.056244       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0306 14:49:09.108476       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0306 14:49:14.215485       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0306 14:49:14.215519       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0306 14:49:14.216875       1 httplog.go:90] GET /metrics: (7.417763ms) 200 [Prometheus/2.14.0 10.131.0.15:47292]\nI0306 14:49:19.134478       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0306 14:49:24.291416       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0306 14:49:24.291470       1 leaderelection.go:66] leaderelection lost\n
Mar 06 14:49:35.420 E kube-apiserver Kube API started failing: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Mar 06 14:49:35.505 E kube-apiserver failed contacting the API: Get https://api.ci-op-0c30lzpk-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=35651&timeout=7m30s&timeoutSeconds=450&watch=true: dial tcp 13.59.141.202:6443: connect: connection refused
Mar 06 14:49:43.726 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5694dd55b6-qmrjp node/ip-10-0-132-205.us-east-2.compute.internal container=manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-03-06T14:49:40Z" level=debug msg="debug logging enabled"\ntime="2020-03-06T14:49:40Z" level=info msg="setting up client for manager"\ntime="2020-03-06T14:49:40Z" level=info msg="setting up manager"\ntime="2020-03-06T14:49:40Z" level=error msg="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refusedunable to set up overall controller manager"\n
Mar 06 14:49:43.856 E ns/openshift-machine-api pod/machine-api-controllers-85964b597c-4zztv node/ip-10-0-132-205.us-east-2.compute.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Mar 06 14:49:43.918 E ns/openshift-marketplace pod/marketplace-operator-85dd5d54d4-5cpjl node/ip-10-0-132-205.us-east-2.compute.internal container=marketplace-operator container exited with code 1 (Error): 
Mar 06 14:49:43.956 E ns/openshift-monitoring pod/cluster-monitoring-operator-5ff7d464f5-nqx2d node/ip-10-0-132-205.us-east-2.compute.internal container=cluster-monitoring-operator container exited with code 1 (Error): W0306 14:49:40.769379       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\n
Mar 06 14:51:27.351 E ns/openshift-cluster-node-tuning-operator pod/tuned-hqs5h node/ip-10-0-130-221.us-east-2.compute.internal container=tuned container exited with code 143 (Error): 73 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-74d9c58f4d-vw98s) labels changed node wide: true\nI0306 14:46:08.672783   45373 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:46:08.674403   45373 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:46:08.796934   45373 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:47:40.668257   45373 openshift-tuned.go:852] Lowering resyncPeriod to 61\nI0306 14:49:05.035756   45373 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-74d9c58f4d-vw98s) labels changed node wide: true\nI0306 14:49:08.672740   45373 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:08.674139   45373 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:08.787730   45373 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:49:14.525470   45373 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-6747d9c6d8-m494l) labels changed node wide: false\nI0306 14:49:14.563460   45373 openshift-tuned.go:550] Pod (openshift-ingress/router-default-56f89bc58d-jwbzx) labels changed node wide: true\nI0306 14:49:18.672767   45373 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:18.675422   45373 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:18.795018   45373 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0306 14:49:35.410018   45373 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0306 14:49:35.415248   45373 openshift-tuned.go:881] Pod event watch channel closed.\nI0306 14:49:35.415275   45373 openshift-tuned.go:883] Increasing resyncPeriod to 122\nI0306 14:49:43.571382   45373 openshift-tuned.go:137] Received signal: terminated\n
Mar 06 14:51:27.360 E ns/openshift-monitoring pod/node-exporter-5s94n node/ip-10-0-130-221.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:30:27Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:27Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:51:27.370 E ns/openshift-sdn pod/ovs-hznzf node/ip-10-0-130-221.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): 6T14:49:04.281Z|00183|connmgr|INFO|br0<->unix#833: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:04.321Z|00184|connmgr|INFO|br0<->unix#836: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:04.345Z|00185|bridge|INFO|bridge br0: deleted interface veth94e3f153 on port 20\n2020-03-06T14:49:04.378Z|00186|connmgr|INFO|br0<->unix#839: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:04.415Z|00187|connmgr|INFO|br0<->unix#842: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:04.437Z|00188|bridge|INFO|bridge br0: deleted interface vethd896e56e on port 6\n2020-03-06T14:49:32.467Z|00189|connmgr|INFO|br0<->unix#865: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:32.505Z|00190|connmgr|INFO|br0<->unix#868: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:32.535Z|00191|bridge|INFO|bridge br0: deleted interface vethc754f073 on port 4\n2020-03-06T14:49:32.581Z|00192|connmgr|INFO|br0<->unix#871: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:32.630Z|00193|connmgr|INFO|br0<->unix#874: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:32.660Z|00194|bridge|INFO|bridge br0: deleted interface vethda019f9a on port 18\n2020-03-06T14:49:32.703Z|00195|connmgr|INFO|br0<->unix#877: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:32.746Z|00196|connmgr|INFO|br0<->unix#880: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:32.770Z|00197|bridge|INFO|bridge br0: deleted interface veth2bdd8c9e on port 8\n2020-03-06T14:49:32.643Z|00021|jsonrpc|WARN|Dropped 6 log messages in last 897 seconds (most recently, 896 seconds ago) due to excessive rate\n2020-03-06T14:49:32.643Z|00022|jsonrpc|WARN|unix#780: receive error: Connection reset by peer\n2020-03-06T14:49:32.643Z|00023|reconnect|WARN|unix#780: connection dropped (Connection reset by peer)\n2020-03-06T14:49:32.764Z|00024|jsonrpc|WARN|unix#786: receive error: Connection reset by peer\n2020-03-06T14:49:32.764Z|00025|reconnect|WARN|unix#786: connection dropped (Connection reset by peer)\nExiting ovs-vswitchd (59296).\nTerminated\n
Mar 06 14:51:27.397 E ns/openshift-multus pod/multus-9mfg2 node/ip-10-0-130-221.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:51:27.435 E ns/openshift-machine-config-operator pod/machine-config-daemon-xtqmd node/ip-10-0-130-221.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:51:29.981 E ns/openshift-multus pod/multus-9mfg2 node/ip-10-0-130-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:51:36.672 E ns/openshift-machine-config-operator pod/machine-config-daemon-xtqmd node/ip-10-0-130-221.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:51:47.050 E openshift-apiserver OpenShift API is not responding to GET requests
Mar 06 14:51:49.396 E ns/openshift-cluster-node-tuning-operator pod/tuned-glbdx node/ip-10-0-154-22.us-east-2.compute.internal container=tuned container exited with code 143 (Error): shift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-7-ip-10-0-154-22.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:49:18.968663   65753 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-7-ip-10-0-154-22.us-east-2.compute.internal) labels changed node wide: false\nI0306 14:49:19.867538   65753 openshift-tuned.go:550] Pod (openshift-machine-api/cluster-autoscaler-operator-646bc99886-qzqpz) labels changed node wide: true\nI0306 14:49:22.411701   65753 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:22.415333   65753 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:22.877546   65753 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:49:23.885073   65753 openshift-tuned.go:550] Pod (openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-79f6d89ff8-q5msf) labels changed node wide: true\nI0306 14:49:27.411686   65753 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:27.413185   65753 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:27.537153   65753 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:49:29.279216   65753 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-operator-747657f9bd-m7ph2) labels changed node wide: true\nI0306 14:49:32.411732   65753 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:32.415955   65753 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:32.618358   65753 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:49:33.872428   65753 openshift-tuned.go:550] Pod (openshift-machine-config-operator/etcd-quorum-guard-57954c6694-hfwn7) labels changed node wide: true\n
Mar 06 14:51:49.437 E ns/openshift-monitoring pod/node-exporter-n5dfj node/ip-10-0-154-22.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): 3-06T14:30:42Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-03-06T14:30:42Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Mar 06 14:51:49.460 E ns/openshift-controller-manager pod/controller-manager-vt2x9 node/ip-10-0-154-22.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 06 14:51:49.581 E ns/openshift-sdn pod/sdn-controller-4cjz8 node/ip-10-0-154-22.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): 901413       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0306 14:34:08.918812       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"f381e5c8-45ab-428f-8406-1a98b829e5aa", ResourceVersion:"25759", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719100325, loc:(*time.Location)(0x2b7dcc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-154-22\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-03-06T14:05:25Z\",\"renewTime\":\"2020-03-06T14:34:08Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-154-22 became leader'\nI0306 14:34:08.918979       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0306 14:34:08.930524       1 master.go:51] Initializing SDN master\nI0306 14:34:08.948745       1 network_controller.go:60] Started OpenShift Network Controller\nW0306 14:46:44.059087       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 19372 (33161)\nW0306 14:46:44.066110       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 20103 (33161)\n
Mar 06 14:51:49.608 E ns/openshift-multus pod/multus-admission-controller-ttjs6 node/ip-10-0-154-22.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 06 14:51:49.635 E ns/openshift-sdn pod/ovs-2vp5v node/ip-10-0-154-22.us-east-2.compute.internal container=openvswitch container exited with code 143 (Error): 3ed7a843 on port 9\n2020-03-06T14:49:24.968Z|00279|connmgr|INFO|br0<->unix#916: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:25.023Z|00280|connmgr|INFO|br0<->unix#919: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:25.057Z|00281|bridge|INFO|bridge br0: deleted interface veth08219098 on port 19\n2020-03-06T14:49:25.111Z|00282|connmgr|INFO|br0<->unix#922: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:25.167Z|00283|connmgr|INFO|br0<->unix#925: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:25.194Z|00284|bridge|INFO|bridge br0: deleted interface veth58861e0f on port 33\n2020-03-06T14:49:25.240Z|00285|connmgr|INFO|br0<->unix#928: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:25.297Z|00286|connmgr|INFO|br0<->unix#931: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:25.321Z|00287|bridge|INFO|bridge br0: deleted interface veth60f68ead on port 25\n2020-03-06T14:49:26.607Z|00288|bridge|INFO|bridge br0: added interface veth8b0c47b3 on port 34\n2020-03-06T14:49:26.663Z|00289|connmgr|INFO|br0<->unix#936: 5 flow_mods in the last 0 s (5 adds)\n2020-03-06T14:49:26.709Z|00290|connmgr|INFO|br0<->unix#940: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:26.712Z|00291|connmgr|INFO|br0<->unix#942: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-03-06T14:49:29.893Z|00292|connmgr|INFO|br0<->unix#946: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:29.955Z|00293|connmgr|INFO|br0<->unix#949: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-06T14:49:29.991Z|00294|bridge|INFO|bridge br0: deleted interface veth8b0c47b3 on port 34\n2020-03-06T14:49:33.542Z|00295|bridge|INFO|bridge br0: added interface veth7c86d607 on port 35\n2020-03-06T14:49:33.593Z|00296|connmgr|INFO|br0<->unix#955: 5 flow_mods in the last 0 s (5 adds)\n2020-03-06T14:49:33.658Z|00297|connmgr|INFO|br0<->unix#959: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-06T14:49:33.663Z|00298|connmgr|INFO|br0<->unix#961: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\nExiting ovs-vswitchd (81886).\nTerminated\n
Mar 06 14:51:49.733 E ns/openshift-multus pod/multus-v6rnr node/ip-10-0-154-22.us-east-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 06 14:51:49.767 E ns/openshift-machine-config-operator pod/machine-config-daemon-cmkk7 node/ip-10-0-154-22.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 06 14:51:49.794 E ns/openshift-machine-config-operator pod/machine-config-server-5q28h node/ip-10-0-154-22.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0306 14:43:34.874521       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-167-gd5599de7-dirty (d5599de7a6b86ec385e0f9c849e93977fcb4eeb8)\nI0306 14:43:34.875992       1 api.go:51] Launching server on :22624\nI0306 14:43:34.876119       1 api.go:51] Launching server on :22623\n
Mar 06 14:51:53.247 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=cluster-policy-controller-7 container exited with code 1 (Error):  too old resource version: 24007 (33157)\nW0306 14:46:44.052627       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19371 (33161)\nW0306 14:46:44.059305       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Job ended with: too old resource version: 19375 (33161)\nW0306 14:46:44.061858       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 19372 (33161)\nW0306 14:46:44.065770       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PodTemplate ended with: too old resource version: 19370 (33161)\nI0306 14:47:14.146890       1 trace.go:81] Trace[1698038544]: "Reflector github.com/openshift/client-go/route/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-03-06 14:46:44.145071918 +0000 UTC m=+974.639568846) (total time: 30.001790516s):\nTrace[1698038544]: [30.001790516s] [30.001790516s] END\nE0306 14:47:14.146925       1 reflector.go:126] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to list *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0306 14:47:14.152272       1 reflector.go:270] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nE0306 14:47:18.173492       1 reflector.go:126] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nE0306 14:47:21.244761       1 reflector.go:126] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to list *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\n
Mar 06 14:51:53.247 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=kube-controller-manager-cert-syncer-7 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:48:20.859024       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:48:20.859395       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:48:30.870001       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:48:30.870357       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:48:40.880025       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:48:40.882935       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:48:50.893034       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:48:50.893366       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:49:00.905572       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:49:00.906196       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:49:10.916772       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:49:10.917319       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:49:20.945028       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:49:20.946132       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0306 14:49:30.953580       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:49:30.954194       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Mar 06 14:51:53.247 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=kube-controller-manager-7 container exited with code 2 (Error): 0ed71a9a3a5c4de7ac2d62049eb6d7c\nI0306 14:49:26.855948       1 aws_loadbalancer.go:1386] Instances removed from load-balancer a0ed71a9a3a5c4de7ac2d62049eb6d7c\nI0306 14:49:27.046092       1 service_controller.go:703] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0306 14:49:27.046280       1 event.go:255] Event(v1.ObjectReference{Kind:"Service", Namespace:"openshift-ingress", Name:"router-default", UID:"0ed71a9a-3a5c-4de7-ac2d-62049eb6d7ce", APIVersion:"v1", ResourceVersion:"9284", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0306 14:49:29.410118       1 garbagecollector.go:405] processing item [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 799085aa-6de8-4a52-8f68-9296ca2bdeba]\nI0306 14:49:29.418583       1 garbagecollector.go:518] delete object [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 799085aa-6de8-4a52-8f68-9296ca2bdeba] with propagation policy Background\nI0306 14:49:29.916097       1 garbagecollector.go:405] processing item [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: 5d417176-9384-4234-a0bb-fef65e69d605]\nI0306 14:49:29.920091       1 garbagecollector.go:518] delete object [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: 5d417176-9384-4234-a0bb-fef65e69d605] with propagation policy Background\nI0306 14:49:32.151602       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-57954c6694, need 3, creating 1\nI0306 14:49:32.162390       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-57954c6694", UID:"44f280ee-d9e2-48b4-922f-68f22942a653", APIVersion:"apps/v1", ResourceVersion:"35645", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-57954c6694-rh6km\n
Mar 06 14:51:53.332 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=scheduler container exited with code 2 (Error): >.".\nI0306 14:49:23.629014       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/catalog-operator-7894f8965f-v842v is bound successfully on node "ip-10-0-132-205.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0306 14:49:23.826151       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-5455c6b7bf-hkb2l is bound successfully on node "ip-10-0-132-205.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0306 14:49:24.063445       1 scheduler.go:667] pod openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator-7c64758cf-k9fn2 is bound successfully on node "ip-10-0-132-205.us-east-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0306 14:49:32.164505       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-57954c6694-rh6km: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0306 14:49:33.881011       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-57954c6694-rh6km: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Mar 06 14:51:53.383 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): ired revision has been compacted\nE0306 14:49:34.735368       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.735630       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.802124       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.854093       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.854425       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.854509       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.854800       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.854930       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.855213       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.855508       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.855589       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.855749       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0306 14:49:34.855895       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0306 14:49:35.196097       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-154-22.us-east-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0306 14:49:35.196331       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Mar 06 14:51:53.383 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0306 14:25:35.902071       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 06 14:51:53.383 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-22.us-east-2.compute.internal node/ip-10-0-154-22.us-east-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:45:41.131859       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:45:41.132291       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0306 14:45:41.340021       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0306 14:45:41.340384       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Mar 06 14:51:53.423 E ns/openshift-monitoring pod/node-exporter-n5dfj node/ip-10-0-154-22.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:51:54.516 E ns/openshift-multus pod/multus-v6rnr node/ip-10-0-154-22.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:51:56.759 E ns/openshift-multus pod/multus-v6rnr node/ip-10-0-154-22.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Mar 06 14:52:00.770 E ns/openshift-machine-config-operator pod/machine-config-daemon-cmkk7 node/ip-10-0-154-22.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 06 14:52:03.504 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Prometheus host: getting Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io prometheus-k8s)
Mar 06 14:52:21.937 E ns/openshift-marketplace pod/certified-operators-5bcf47bb59-z9tnf node/ip-10-0-129-121.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Mar 06 14:52:30.956 E ns/openshift-marketplace pod/community-operators-59c675bc9d-qq8s4 node/ip-10-0-129-121.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Mar 06 14:52:47.192 E ns/openshift-cluster-node-tuning-operator pod/tuned-glbdx node/ip-10-0-154-22.us-east-2.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0306 14:51:53.566622    3462 openshift-tuned.go:209] Extracting tuned profiles\nI0306 14:51:53.585145    3462 openshift-tuned.go:739] Resync period to pull node/pod labels: 63 [s]\nE0306 14:51:59.771103    3462 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-154-22.us-east-2.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0306 14:51:59.771160    3462 openshift-tuned.go:883] Increasing resyncPeriod to 126\n
Mar 06 14:52:47.792 E ns/openshift-cluster-node-tuning-operator pod/tuned-6645x node/ip-10-0-131-229.us-east-2.compute.internal container=tuned container exited with code 143 (Error): vices cpu2, cpu3, cpu0, cpu1\n2020-03-06 14:47:56,780 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-06 14:47:56,784 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-06 14:47:56,788 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-06 14:47:56,790 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-06 14:47:56,955 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-06 14:47:56,972 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0306 14:49:16.531093    3854 openshift-tuned.go:550] Pod (openshift-machine-api/cluster-autoscaler-operator-646bc99886-bjsg9) labels changed node wide: true\nI0306 14:49:21.066377    3854 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:21.070291    3854 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:21.271313    3854 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:49:21.421407    3854 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-operator-b447b88db-zxkph) labels changed node wide: true\nI0306 14:49:26.065817    3854 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0306 14:49:26.069078    3854 openshift-tuned.go:441] Getting recommended profile...\nI0306 14:49:26.290063    3854 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0306 14:49:35.412769    3854 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0306 14:49:35.447491    3854 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-131-229.us-east-2.compute.internal: unexpected EOF\nI0306 14:49:35.447732    3854 openshift-tuned.go:883] Increasing resyncPeriod to 208\n
Mar 06 14:52:47.866 E ns/openshift-cluster-node-tuning-operator pod/tuned-hqs5h node/ip-10-0-130-221.us-east-2.compute.internal container=tuned container exited with code 143 (Error): Failed to execute operation: Unit file tuned.service does not exist.\nI0306 14:51:29.609046    2960 openshift-tuned.go:209] Extracting tuned profiles\nI0306 14:51:29.660930    2960 openshift-tuned.go:739] Resync period to pull node/pod labels: 68 [s]\nE0306 14:51:35.827187    2960 openshift-tuned.go:881] Get https://172.30.0.1:443/api/v1/nodes/ip-10-0-130-221.us-east-2.compute.internal: dial tcp 172.30.0.1:443: connect: no route to host\nI0306 14:51:35.827314    2960 openshift-tuned.go:883] Increasing resyncPeriod to 136\n