Result: SUCCESS
Tests: 5 failed / 20 succeeded
Started: 2020-04-16 09:49
Elapsed: 1h19m
Work namespace: ci-op-kgh86rjz
Refs: release-4.4:322a876e, 125:6289f754
Pod: 95c6c28c-7fc7-11ea-bacb-0a58ac10460f
Repo: openshift/cluster-node-tuning-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 37m14s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 7s of 31m55s (0%):

Apr 16 10:34:50.701 E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service stopped responding to GET requests on reused connections
Apr 16 10:34:51.700 E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service is not responding to GET requests on reused connections
Apr 16 10:34:51.873 I ns/e2e-k8s-service-lb-available-4722 svc/service-test Service started responding to GET requests on reused connections
Apr 16 10:35:01.701 E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service stopped responding to GET requests over new connections
Apr 16 10:35:02.700 - 1s    E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service is not responding to GET requests over new connections
Apr 16 10:35:03.782 I ns/e2e-k8s-service-lb-available-4722 svc/service-test Service started responding to GET requests over new connections
Apr 16 10:36:06.701 E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service stopped responding to GET requests on reused connections
Apr 16 10:36:06.850 I ns/e2e-k8s-service-lb-available-4722 svc/service-test Service started responding to GET requests on reused connections
Apr 16 10:52:38.701 E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service stopped responding to GET requests on reused connections
Apr 16 10:52:39.700 - 999ms E ns/e2e-k8s-service-lb-available-4722 svc/service-test Service is not responding to GET requests on reused connections
Apr 16 10:52:41.089 I ns/e2e-k8s-service-lb-available-4722 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1587034806.xml
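
The events above distinguish failures "on reused connections" from failures "over new connections": the disruption monitor polls the service with two HTTP clients, one that keeps connections alive and reuses them and one that dials a fresh connection for every GET, so the two failure modes can be told apart. A minimal Go sketch of that polling pattern (not the actual openshift/origin monitor code; the URL and interval are placeholders):

// Sketch only: two probe loops, one reusing keep-alive connections,
// one forcing a new connection per request.
package main

import (
	"log"
	"net/http"
	"time"
)

func probe(name string, client *http.Client, url string) {
	for range time.Tick(time.Second) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("Service stopped responding to GET requests %s: %v", name, err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			log.Printf("Service returned %d %s", resp.StatusCode, name)
		}
	}
}

func main() {
	url := "http://service-test.example.com/" // placeholder for the service load balancer hostname
	reused := &http.Client{Timeout: 3 * time.Second} // default transport reuses keep-alive connections
	fresh := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true}, // new TCP connection for every request
	}
	go probe("on reused connections", reused, url)
	probe("over new connections", fresh, url)
}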



Cluster upgrade Cluster frontend ingress remain available 34m43s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 10s of 34m42s (1%):

Apr 16 10:34:51.500 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 16 10:34:52.499 - 3s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 16 10:34:55.795 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 16 10:34:56.499 - 3s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 16 10:34:56.800 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 16 10:35:01.117 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 16 10:36:28.500 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 16 10:36:28.725 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1587034806.xml



Cluster upgrade Kubernetes APIs remain available 34m43s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 21s of 34m42s (1%):

Apr 16 10:48:54.778 E kube-apiserver Kube API started failing: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Apr 16 10:48:55.485 E kube-apiserver Kube API is not responding to GET requests
Apr 16 10:48:55.814 I kube-apiserver Kube API started responding to GET requests
Apr 16 10:52:30.464 E kube-apiserver Kube API started failing: etcdserver: request timed out
Apr 16 10:52:30.485 - 18s   E kube-apiserver Kube API is not responding to GET requests
Apr 16 10:52:48.945 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1587034806.xml



Cluster upgrade OpenShift APIs remain available 34m43s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 28s of 34m42s (1%):

Apr 16 10:34:57.522 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 16 10:34:58.521 - 9s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 16 10:35:07.684 I openshift-apiserver OpenShift API started responding to GET requests
Apr 16 10:48:54.778 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Apr 16 10:48:55.521 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 16 10:48:55.817 I openshift-apiserver OpenShift API started responding to GET requests
Apr 16 10:52:30.440 I openshift-apiserver OpenShift API stopped responding to GET requests: etcdserver: request timed out
Apr 16 10:52:30.521 - 18s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 16 10:52:48.946 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1587034806.xml



openshift-tests Monitor cluster while tests execute 37m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
214 error level events were detected during this test run:

Apr 16 10:23:15.110 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): 07] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machine.openshift.io/v1beta1/machines?allowWatchBookmarks=true&resourceVersion=17357&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:23:14.407887       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/servicemonitors?allowWatchBookmarks=true&resourceVersion=20688&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:23:14.409438       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=17367&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:23:14.410477       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:6443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=14311&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0416 10:23:14.750254       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0416 10:23:14.750380       1 controllermanager.go:291] leaderelection lost\nI0416 10:23:14.773332       1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key\nI0416 10:23:14.773350       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0416 10:23:14.773360       1 cronjob_controller.go:101] Shutting down CronJob Manager\nI0416 10:23:14.773369       1 tokens_controller.go:188] Shutting down\n
Apr 16 10:25:50.752 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-5957db4dc7" has successfully progressed.
Apr 16 10:26:38.641 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-d86ccc9bf-rmt45 node/ip-10-0-146-104.us-west-1.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): 5       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7fbf73f3-8ca9-4910-afc4-bc6351bf6b3c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-139-50.us-west-1.compute.internal pods/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0416 10:26:37.967096       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:26:37.967698       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0416 10:26:37.968597       1 base_controller.go:74] Shutting down RevisionController ...\nI0416 10:26:37.968628       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0416 10:26:37.968644       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0416 10:26:37.968658       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0416 10:26:37.968676       1 base_controller.go:74] Shutting down InstallerController ...\nI0416 10:26:37.968692       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0416 10:26:37.968741       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0416 10:26:37.968758       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0416 10:26:37.968774       1 base_controller.go:74] Shutting down PruneController ...\nI0416 10:26:37.968787       1 targetconfigcontroller.go:644] Shutting down TargetConfigController\nI0416 10:26:37.968814       1 base_controller.go:74] Shutting down  ...\nF0416 10:26:37.968820       1 builder.go:243] stopped\n
Apr 16 10:26:44.481 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-67c6544ff-4jk8l node/ip-10-0-131-84.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): taticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-16T10:09:10Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0416 10:20:05.771277       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"9433f15b-5a41-4c1d-8edf-aa201399213f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-139-50.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-139-50.us-west-1.compute.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0416 10:20:06.552170       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"9433f15b-5a41-4c1d-8edf-aa201399213f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-scheduler:\ncause by changes in data.status\nI0416 10:20:07.372411       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"9433f15b-5a41-4c1d-8edf-aa201399213f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-139-50.us-west-1.compute.internal -n openshift-kube-scheduler because it was missing\nI0416 10:26:43.698320       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:26:43.698773       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0416 10:26:43.698986       1 builder.go:209] server exited\n
Apr 16 10:27:34.045 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0416 10:27:33.163556       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:27:33.174468       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:27:33.177553       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:27:33.178513       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0416 10:27:33.178803       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 16 10:27:34.227 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:27:51.279 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:28:24.408 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:28:37.603 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0416 10:28:36.763757       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:28:36.765310       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:28:36.768046       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:28:36.768200       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0416 10:28:36.769394       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 16 10:29:00.651 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0416 10:28:59.756455       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:28:59.758567       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:28:59.760237       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:28:59.760387       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0416 10:28:59.761248       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 16 10:30:10.606 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:30:15.587 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0416 10:30:14.830600       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:30:14.832644       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:30:14.834672       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:30:14.834678       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0416 10:30:14.835464       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 16 10:30:26.700 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0416 10:30:26.229212       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:30:26.230903       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:30:26.232615       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:30:26.232687       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0416 10:30:26.233818       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 16 10:30:33.734 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:30:55.836 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:31:08.663 E ns/openshift-kube-storage-version-migrator pod/migrator-59fb86b58c-jw759 node/ip-10-0-151-181.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Apr 16 10:31:25.713 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-784665fdb8-r925j node/ip-10-0-151-181.us-west-1.compute.internal container=operator container exited with code 255 (Error): 165957ms\nI0416 10:31:13.398853       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:13.398841234 +0000 UTC m=+943.603655547\nI0416 10:31:13.437749       1 operator.go:147] Finished syncing operator at 38.90009ms\nI0416 10:31:13.524405       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:13.524396678 +0000 UTC m=+943.729210949\nI0416 10:31:13.591684       1 operator.go:147] Finished syncing operator at 67.280498ms\nI0416 10:31:17.966068       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:17.966057218 +0000 UTC m=+948.170871513\nI0416 10:31:18.012577       1 operator.go:147] Finished syncing operator at 46.511641ms\nI0416 10:31:18.448930       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:18.448919775 +0000 UTC m=+948.653734066\nI0416 10:31:18.494472       1 operator.go:147] Finished syncing operator at 45.544173ms\nI0416 10:31:18.534151       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:18.534141014 +0000 UTC m=+948.738955487\nI0416 10:31:18.573502       1 operator.go:147] Finished syncing operator at 39.335112ms\nI0416 10:31:18.989850       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:18.98983875 +0000 UTC m=+949.194653033\nI0416 10:31:19.049719       1 operator.go:147] Finished syncing operator at 59.873103ms\nI0416 10:31:24.756920       1 operator.go:145] Starting syncing operator at 2020-04-16 10:31:24.756912004 +0000 UTC m=+954.961726273\nI0416 10:31:24.795989       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:31:24.796292       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0416 10:31:24.796565       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0416 10:31:24.796591       1 logging_controller.go:93] Shutting down LogLevelController\nI0416 10:31:24.796606       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nF0416 10:31:24.796687       1 builder.go:243] stopped\n
Apr 16 10:31:28.968 E ns/openshift-monitoring pod/node-exporter-j6h48 node/ip-10-0-131-84.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:30:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:30:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:30:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:31:30.002 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-57786f8b8d-cdl4g node/ip-10-0-131-84.us-west-1.compute.internal container=operator container exited with code 255 (Error): go:185] Listing and watching *v1.Service from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.450283       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.450299       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.450834       1 reflector.go:185] Listing and watching *v1.Secret from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.450969       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.518978       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0416 10:31:16.598251       1 request.go:565] Throttling request took 147.25772ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\nI0416 10:31:16.798214       1 request.go:565] Throttling request took 347.12559ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/configmaps?limit=500&resourceVersion=0\nI0416 10:31:25.863182       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0416 10:31:29.061347       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:31:29.061573       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0416 10:31:29.061603       1 finalizer_controller.go:140] Shutting down FinalizerController\nI0416 10:31:29.061778       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-apiserver\nI0416 10:31:29.061808       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0416 10:31:29.061822       1 workload_controller.go:254] Shutting down OpenShiftSvCatAPIServerOperator\nF0416 10:31:29.061935       1 builder.go:243] stopped\n
Apr 16 10:31:34.060 E ns/openshift-service-ca pod/service-ca-55bc48d947-g4sdf node/ip-10-0-146-104.us-west-1.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Apr 16 10:31:34.474 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7f9b4f6ff8-vt262 node/ip-10-0-133-187.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 16 10:31:44.508 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-authentication-operator/authentication-operator is progressing ReplicaSetUpdated: ReplicaSet "authentication-operator-77868bc455" is progressing.\n* deployment openshift-cluster-samples-operator/cluster-samples-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-samples-operator-747d87d48" is progressing.\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-679d7b555f" is progressing.\n* deployment openshift-controller-manager-operator/openshift-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-controller-manager-operator-d87d6b67b" is progressing.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-image-registry-operator-68596bdccd" is progressing.\n* deployment openshift-marketplace/marketplace-operator is progressing ReplicaSetUpdated: ReplicaSet "marketplace-operator-6fd87f4d9b" is progressing.\n* deployment openshift-operator-lifecycle-manager/olm-operator is progressing ReplicaSetUpdated: ReplicaSet "olm-operator-868b4787cb" is progressing.\n* deployment openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-controller-manager-operator-5f8cbf95f9" is progressing.
Apr 16 10:31:46.781 E ns/openshift-monitoring pod/prometheus-adapter-99c7c6884-mkshw node/ip-10-0-151-181.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0416 10:21:14.924265       1 adapter.go:93] successfully using in-cluster auth\nI0416 10:21:15.696142       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 16 10:31:51.132 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-6f8cd9c46c-5cz8q node/ip-10-0-131-84.us-west-1.compute.internal container=operator container exited with code 255 (Error): er diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-16T10:09:08Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-16T10:31:32Z","message":"Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.","reason":"_DesiredStateNotYetAchieved","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-04-16T10:13:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-16T10:09:08Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0416 10:31:32.280206       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"3c89de34-93f7-4672-bdc6-30ce47f42591", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: daemonset/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.")\nI0416 10:31:40.499116       1 httplog.go:90] GET /metrics: (29.931474ms) 200 [Prometheus/2.15.2 10.129.2.13:56304]\nI0416 10:31:49.384832       1 httplog.go:90] GET /metrics: (22.008865ms) 200 [Prometheus/2.15.2 10.131.0.14:35522]\nI0416 10:31:49.998877       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:31:50.000176       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0416 10:31:50.000270       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0416 10:31:50.000451       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0416 10:31:50.000526       1 builder.go:243] stopped\n
Apr 16 10:31:52.516 E ns/openshift-monitoring pod/prometheus-adapter-99c7c6884-bpqwt node/ip-10-0-133-187.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0416 10:21:14.469617       1 adapter.go:93] successfully using in-cluster auth\nI0416 10:21:15.512274       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 16 10:32:00.207 E ns/openshift-monitoring pod/node-exporter-bs7jx node/ip-10-0-146-104.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:32:03.205 E ns/openshift-controller-manager pod/controller-manager-r85kt node/ip-10-0-131-84.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0416 10:14:17.132965       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0416 10:14:17.134598       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable-initial@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0416 10:14:17.134622       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable-initial@sha256:30512b4dcc153cda7e957155f12676842a2ac2567145242d18857e2c39b93e60"\nI0416 10:14:17.134786       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0416 10:14:17.135350       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 16 10:32:03.219 E ns/openshift-controller-manager pod/controller-manager-hhq6l node/ip-10-0-146-104.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): am: stream error: stream ID 171; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:27:54.310737       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 209; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:28:23.862456       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 199; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:28:23.863566       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 173; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:28:23.863663       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 251; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:28:23.863749       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 197; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:28:23.863854       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 229; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 16 10:32:14.635 E ns/openshift-monitoring pod/thanos-querier-f84c444d6-qbzgs node/ip-10-0-133-187.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/16 10:21:53 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/16 10:21:53 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/16 10:21:53 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/16 10:21:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/16 10:21:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/16 10:21:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/16 10:21:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/16 10:21:53 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0416 10:21:53.716688       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/16 10:21:53 http.go:107: HTTPS: listening on [::]:9091\n
Apr 16 10:32:15.029 E ns/openshift-monitoring pod/node-exporter-l9czs node/ip-10-0-151-181.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:32:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:32:25.713 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-133-187.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:32:03.557Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:32:03.565Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:32:03.566Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:32:03.568Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:32:03.568Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:32:29.710 E ns/openshift-monitoring pod/node-exporter-fxhn5 node/ip-10-0-133-187.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:31:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:32:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:32:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:32:42.448 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:32:43.122 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:32:38.727Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:32:38.732Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:32:38.732Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:32:38.733Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:32:38.733Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:32:38.733Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:32:38.734Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:38.734Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:32:38.734Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:32:38.734Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:32:55.306 E ns/openshift-console-operator pod/console-operator-5cd6594468-7h7x2 node/ip-10-0-139-50.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): nt decoding: unexpected EOF\nI0416 10:31:27.937942       1 log.go:172] http: TLS handshake error from 10.131.0.14:45154: remote error: tls: bad certificate\nI0416 10:31:41.109013       1 log.go:172] http: TLS handshake error from 10.129.2.13:47244: remote error: tls: bad certificate\nI0416 10:31:57.944854       1 log.go:172] http: TLS handshake error from 10.131.0.14:45982: remote error: tls: bad certificate\nI0416 10:32:27.936406       1 log.go:172] http: TLS handshake error from 10.131.0.14:46624: remote error: tls: bad certificate\nI0416 10:32:41.110014       1 log.go:172] http: TLS handshake error from 10.128.2.27:58666: remote error: tls: bad certificate\nI0416 10:32:54.562831       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:32:54.563295       1 controller.go:70] Shutting down Console\nI0416 10:32:54.563390       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0416 10:32:54.563444       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0416 10:32:54.563498       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0416 10:32:54.563544       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0416 10:32:54.563589       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0416 10:32:54.563635       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0416 10:32:54.563679       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0416 10:32:54.563806       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0416 10:32:54.563850       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0416 10:32:54.563907       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0416 10:32:54.563948       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nF0416 10:32:54.563910       1 builder.go:210] server exited\n
Apr 16 10:33:03.614 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:33:33.757 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 16 10:34:34.838 E ns/openshift-sdn pod/sdn-controller-22ff8 node/ip-10-0-131-84.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0416 10:06:09.314512       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0416 10:12:53.234948       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 16 10:34:40.939 E ns/openshift-sdn pod/sdn-2pgrd node/ip-10-0-146-104.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ole:https to [10.128.0.76:8443 10.130.0.69:8443]\nI0416 10:34:21.956555    2062 roundrobin.go:217] Delete endpoint 10.129.0.54:8443 for service "openshift-console/console:https"\nI0416 10:34:22.100834    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:22.100862    2062 proxier.go:347] userspace syncProxyRules took 35.216817ms\nI0416 10:34:22.246734    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:22.246761    2062 proxier.go:347] userspace syncProxyRules took 31.622833ms\nI0416 10:34:24.456523    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.131.84:6443 10.0.139.50:6443 10.0.146.104:6443]\nI0416 10:34:24.649857    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:24.649888    2062 proxier.go:347] userspace syncProxyRules took 42.256274ms\nI0416 10:34:30.644017    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.131.84:6443 10.0.139.50:6443 10.0.146.104:6443]\nI0416 10:34:30.864771    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:30.864803    2062 proxier.go:347] userspace syncProxyRules took 36.412053ms\nI0416 10:34:32.014599    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.18:6443 10.130.0.4:6443]\nI0416 10:34:32.014636    2062 roundrobin.go:217] Delete endpoint 10.128.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0416 10:34:32.167540    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:32.167569    2062 proxier.go:347] userspace syncProxyRules took 29.607522ms\nI0416 10:34:40.358416    2062 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0416 10:34:40.358606    2062 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 16 10:34:41.954 E ns/openshift-console pod/console-84769d9558-b2xx2 node/ip-10-0-146-104.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-04-16T10:21:24Z cmd/main: cookies are secure!\n2020-04-16T10:21:24Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-16T10:21:34Z cmd/main: Binding to [::]:8443...\n2020-04-16T10:21:34Z cmd/main: using TLS\n
Apr 16 10:34:47.890 E ns/openshift-console pod/console-84769d9558-rxx7s node/ip-10-0-131-84.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-04-16T10:21:08Z cmd/main: cookies are secure!\n2020-04-16T10:21:08Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-16T10:21:18Z cmd/main: Binding to [::]:8443...\n2020-04-16T10:21:18Z cmd/main: using TLS\n
Apr 16 10:34:49.819 E ns/openshift-sdn pod/sdn-controller-bmqwm node/ip-10-0-139-50.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): nids.go:115] Allocated netid 3243413 for namespace "openshift-console"\nI0416 10:13:51.138375       1 vnids.go:115] Allocated netid 5088122 for namespace "openshift-console-operator"\nE0416 10:16:34.403331       1 leaderelection.go:367] Failed to update lock: Put https://api-int.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.139.50:50578->10.0.151.55:6443: read: connection reset by peer\nI0416 10:22:52.837777       1 vnids.go:115] Allocated netid 5552369 for namespace "e2e-frontend-ingress-available-7415"\nI0416 10:22:52.847910       1 vnids.go:115] Allocated netid 6073060 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-2844"\nI0416 10:22:52.856387       1 vnids.go:115] Allocated netid 4713675 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3662"\nI0416 10:22:52.865870       1 vnids.go:115] Allocated netid 11003844 for namespace "e2e-kubernetes-api-available-672"\nI0416 10:22:52.876554       1 vnids.go:115] Allocated netid 8906398 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-1225"\nI0416 10:22:52.890245       1 vnids.go:115] Allocated netid 1964258 for namespace "e2e-openshift-api-available-5000"\nI0416 10:22:52.910215       1 vnids.go:115] Allocated netid 13407323 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-6294"\nI0416 10:22:52.929402       1 vnids.go:115] Allocated netid 1620812 for namespace "e2e-k8s-sig-apps-job-upgrade-6281"\nI0416 10:22:52.941746       1 vnids.go:115] Allocated netid 8486647 for namespace "e2e-k8s-sig-apps-deployment-upgrade-7258"\nI0416 10:22:52.960581       1 vnids.go:115] Allocated netid 14841348 for namespace "e2e-k8s-service-lb-available-4722"\nE0416 10:31:05.900944       1 leaderelection.go:367] Failed to update lock: Put https://api-int.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.139.50:58424->10.0.151.55:6443: read: connection reset by peer\n
Apr 16 10:34:52.374 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5cfb9d64bc-bzw8g node/ip-10-0-146-104.us-west-1.compute.internal container=manager container exited with code 1 (Error): loud-credential-operator/openshift-machine-api-ovirt\ntime="2020-04-16T10:31:41Z" level=debug msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-16T10:31:41Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-16T10:31:41Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-16T10:31:41Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-04-16T10:31:41Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-04-16T10:31:41Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-04-16T10:31:41Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-04-16T10:31:41Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-04-16T10:31:41Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-04-16T10:31:41Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-04-16T10:33:41Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-04-16T10:33:41Z" level=info msg="reconcile complete" controller=metrics elapsed=1.805006ms\ntime="2020-04-16T10:34:50Z" level=fatal msg="unable to run the manager" error="leader election lost"\n
Apr 16 10:34:58.393 E ns/openshift-sdn pod/sdn-controller-tl8zx node/ip-10-0-146-104.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0416 10:06:09.413322       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0416 10:12:53.266790       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 16 10:35:02.504 E ns/openshift-multus pod/multus-4rpxc node/ip-10-0-151-181.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 16 10:35:02.913 E ns/openshift-multus pod/multus-admission-controller-jwn6n node/ip-10-0-139-50.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 16 10:35:07.541 E ns/openshift-sdn pod/sdn-scwt4 node/ip-10-0-134-80.us-west-1.compute.internal container=sdn container exited with code 255 (Error): sion-controller: to [10.129.0.18:6443 10.130.0.4:6443]\nI0416 10:34:32.014287    2282 roundrobin.go:217] Delete endpoint 10.128.0.5:6443 for service "openshift-multus/multus-admission-controller:"\nI0416 10:34:32.148965    2282 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:32.148990    2282 proxier.go:347] userspace syncProxyRules took 27.093956ms\nI0416 10:34:52.345749    2282 roundrobin.go:295] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/controller-manager-service:\nI0416 10:34:52.347109    2282 roundrobin.go:295] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics\nI0416 10:34:52.478595    2282 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:52.478617    2282 proxier.go:347] userspace syncProxyRules took 27.570573ms\nI0416 10:34:52.613156    2282 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:52.613183    2282 proxier.go:347] userspace syncProxyRules took 28.160013ms\nI0416 10:34:53.348048    2282 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.130.0.60:2112]\nI0416 10:34:53.348133    2282 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.130.0.60:443]\nI0416 10:34:53.492090    2282 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:53.492118    2282 proxier.go:347] userspace syncProxyRules took 28.095337ms\nI0416 10:34:53.621481    2282 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:34:53.621503    2282 proxier.go:347] userspace syncProxyRules took 27.549159ms\nI0416 10:35:05.805738    2282 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0416 10:35:07.164427    2282 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 16 10:35:34.684 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6b947b74f6-5g2gr node/ip-10-0-146-104.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): text canceled\nE0416 10:35:04.104643       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, context canceled]\nE0416 10:35:09.891051       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: context canceled\nE0416 10:35:09.891097       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: context canceled]\nE0416 10:35:17.449462       1 webhook.go:109] Failed to make webhook authenticator request: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: context canceled\nE0416 10:35:17.449505       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenreviews: context canceled]\nI0416 10:35:17.502556       1 leaderelection.go:288] failed to renew lease openshift-apiserver-operator/openshift-apiserver-operator-lock: failed to tryAcquireOrRenew context deadline exceeded\nI0416 10:35:17.503050       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator-lock", UID:"c404f3f1-bafd-4d84-87c2-8a8563311396", APIVersion:"v1", ResourceVersion:"30028", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 519d392c-ef1a-4890-9ba9-e97982cca1ba stopped leading\nE0416 10:35:17.503358       1 leaderelection.go:331] error retrieving resource lock openshift-apiserver-operator/openshift-apiserver-operator-lock: Get https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver-operator/configmaps/openshift-apiserver-operator-lock?timeout=35s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nF0416 10:35:34.230990       1 leaderelection.go:67] leaderelection lost\nI0416 10:35:34.237991       1 migration_controller.go:327] Shutting down EncryptionMigrationController\n
Apr 16 10:35:41.721 E ns/openshift-multus pod/multus-admission-controller-n7gq5 node/ip-10-0-146-104.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 16 10:36:01.193 E ns/openshift-multus pod/multus-nlsqm node/ip-10-0-131-84.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 16 10:36:19.288 E ns/openshift-multus pod/multus-admission-controller-92jj5 node/ip-10-0-131-84.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 16 10:36:19.385 E ns/openshift-sdn pod/sdn-xw2tx node/ip-10-0-131-84.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 28 proxier.go:347] userspace syncProxyRules took 28.303269ms\nI0416 10:35:49.058735   90528 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:35:49.058762   90528 proxier.go:347] userspace syncProxyRules took 29.180061ms\nI0416 10:35:49.627095   90528 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.54:8443 10.130.0.54:8443]\nI0416 10:35:49.627149   90528 roundrobin.go:217] Delete endpoint 10.129.0.59:8443 for service "openshift-apiserver/api:https"\nI0416 10:35:49.800784   90528 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:35:49.800832   90528 proxier.go:347] userspace syncProxyRules took 29.024108ms\nI0416 10:35:49.974789   90528 pod.go:539] CNI_DEL openshift-apiserver/apiserver-5cc6b798df-982xp\nI0416 10:35:56.980863   90528 pod.go:503] CNI_ADD openshift-apiserver/apiserver-84f47f7dfb-bph8h got IP 10.129.0.73, ofport 74\nI0416 10:36:04.225414   90528 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.54:8443 10.129.0.73:8443 10.130.0.54:8443]\nI0416 10:36:04.308467   90528 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.129.0.73:8443 10.130.0.54:8443]\nI0416 10:36:04.308673   90528 roundrobin.go:217] Delete endpoint 10.128.0.54:8443 for service "openshift-apiserver/api:https"\nI0416 10:36:04.474685   90528 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:04.474748   90528 proxier.go:347] userspace syncProxyRules took 30.309657ms\nI0416 10:36:04.619561   90528 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:04.619582   90528 proxier.go:347] userspace syncProxyRules took 32.273945ms\nI0416 10:36:18.422865   90528 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0416 10:36:18.422927   90528 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 16 10:36:45.451 E ns/openshift-sdn pod/sdn-ng5xm node/ip-10-0-133-187.us-west-1.compute.internal container=sdn container exited with code 255 (Error): roxyRules took 30.134298ms\nI0416 10:36:04.509209   82636 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:04.509231   82636 proxier.go:347] userspace syncProxyRules took 27.74414ms\nI0416 10:36:21.612665   82636 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.78:8443 10.129.0.73:8443 10.130.0.54:8443]\nI0416 10:36:21.684668   82636 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.78:8443 10.129.0.73:8443]\nI0416 10:36:21.684713   82636 roundrobin.go:217] Delete endpoint 10.130.0.54:8443 for service "openshift-apiserver/api:https"\nI0416 10:36:21.759578   82636 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:21.759604   82636 proxier.go:347] userspace syncProxyRules took 30.648818ms\nI0416 10:36:21.892765   82636 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:21.892787   82636 proxier.go:347] userspace syncProxyRules took 27.238415ms\nI0416 10:36:27.301300   82636 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.77:6443 10.129.0.74:6443 10.130.0.71:6443]\nI0416 10:36:27.437385   82636 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:27.437408   82636 proxier.go:347] userspace syncProxyRules took 27.903229ms\nI0416 10:36:36.672833   82636 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-apiserver/api:https to [10.128.0.78:8443 10.129.0.73:8443 10.130.0.72:8443]\nI0416 10:36:36.827066   82636 proxier.go:368] userspace proxy: processing 0 service events\nI0416 10:36:36.827090   82636 proxier.go:347] userspace syncProxyRules took 28.405352ms\nI0416 10:36:45.031937   82636 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0416 10:36:45.031975   82636 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 16 10:36:50.031 E ns/openshift-multus pod/multus-dzdxz node/ip-10-0-146-104.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 16 10:36:58.483 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6ddd799d-swr95 node/ip-10-0-133-187.us-west-1.compute.internal container=snapshot-controller container exited with code 255 (Error): 
Apr 16 10:37:42.262 E ns/openshift-multus pod/multus-jc4hh node/ip-10-0-134-80.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 16 10:38:31.756 E ns/openshift-multus pod/multus-kv5gc node/ip-10-0-133-187.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 16 10:39:55.028 E ns/openshift-machine-config-operator pod/machine-config-operator-6897c5f4d6-jkgnz node/ip-10-0-131-84.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error):        1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfig: the server could not find the requested resource (get machineconfigs.machineconfiguration.openshift.io)\nE0416 10:09:04.972210       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0416 10:09:04.980122       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0416 10:09:06.040882       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0416 10:09:09.580408       1 sync.go:61] [init mode] synced RenderConfig in 5.628235976s\nI0416 10:09:10.220190       1 sync.go:61] [init mode] synced MachineConfigPools in 639.731397ms\nI0416 10:10:05.798023       1 sync.go:61] [init mode] synced MachineConfigDaemon in 55.577788694s\nI0416 10:10:12.059794       1 sync.go:61] [init mode] synced MachineConfigController in 6.261723364s\nI0416 10:10:21.173415       1 sync.go:61] [init mode] synced MachineConfigServer in 9.113565808s\nI0416 10:10:32.184107       1 sync.go:61] [init mode] synced RequiredPools in 11.010653214s\nI0416 10:10:32.266440       1 sync.go:85] Initialization complete\nE0416 10:12:53.243060       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 16 10:42:07.917 E ns/openshift-machine-config-operator pod/machine-config-daemon-m9nf8 node/ip-10-0-139-50.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:42:21.544 E ns/openshift-machine-config-operator pod/machine-config-daemon-8x9nd node/ip-10-0-131-84.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:42:57.674 E ns/openshift-machine-config-operator pod/machine-config-controller-6b67c4b4b9-2rv72 node/ip-10-0-131-84.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): ineconfiguration.openshift.io/state = Done\nI0416 10:15:38.235425       1 node_controller.go:452] Pool worker: node ip-10-0-133-187.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-b0263c18f73929dbe8fc7c8895eef7e7\nI0416 10:15:38.235461       1 node_controller.go:452] Pool worker: node ip-10-0-133-187.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-b0263c18f73929dbe8fc7c8895eef7e7\nI0416 10:15:38.235471       1 node_controller.go:452] Pool worker: node ip-10-0-133-187.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0416 10:15:59.028632       1 node_controller.go:452] Pool worker: node ip-10-0-134-80.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-b0263c18f73929dbe8fc7c8895eef7e7\nI0416 10:15:59.028845       1 node_controller.go:452] Pool worker: node ip-10-0-134-80.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-b0263c18f73929dbe8fc7c8895eef7e7\nI0416 10:15:59.028869       1 node_controller.go:452] Pool worker: node ip-10-0-134-80.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0416 10:17:02.339963       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0416 10:17:02.447868       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nI0416 10:20:49.815565       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0416 10:20:49.843410       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\nI0416 10:31:17.974742       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master\nI0416 10:31:18.305238       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker\n
Apr 16 10:44:45.032 E ns/openshift-machine-config-operator pod/machine-config-server-c92hj node/ip-10-0-131-84.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:10:13.080207       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:10:13.081382       1 api.go:51] Launching server on :22624\nI0416 10:10:13.081539       1 api.go:51] Launching server on :22623\n
Apr 16 10:44:56.327 E ns/openshift-console-operator pod/console-operator-6f76f87ff7-z9rzr node/ip-10-0-131-84.us-west-1.compute.internal container=console-operator container exited with code 255 (Error):       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-04-16-095239\nE0416 10:34:17.344934       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-04-16-095239\nI0416 10:34:22.054693       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-16T10:14:11Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-16T10:34:22Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-16T10:34:22Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-16T10:14:12Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0416 10:34:22.065598       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"715c5ee4-ac43-419a-ad96-3bbac25bc15b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0416 10:35:49.596938       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 117; INTERNAL_ERROR") has prevented the request from succeeding\nI0416 10:44:55.160131       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:44:55.160751       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0416 10:44:55.160857       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0416 10:44:55.160881       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0416 10:44:55.160898       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0416 10:44:55.161087       1 builder.go:243] stopped\n
Apr 16 10:44:57.645 E ns/openshift-machine-config-operator pod/machine-config-server-knztk node/ip-10-0-146-104.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:10:16.103337       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:10:16.104441       1 api.go:51] Launching server on :22624\nI0416 10:10:16.104517       1 api.go:51] Launching server on :22623\nI0416 10:11:07.392749       1 api.go:97] Pool worker requested by 10.0.133.202:6541\n
Apr 16 10:44:58.186 E ns/openshift-machine-config-operator pod/machine-config-controller-b4666b74b-xkdvf node/ip-10-0-131-84.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): tion.openshift.io/v1  } {MachineConfig  99-worker-82b235db-75d9-4a0e-99e9-24652379a068-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-worker-ssh  machineconfiguration.openshift.io/v1  }]\nI0416 10:44:47.743928       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-325e3c1dce12b375552cad19979aeb2c\nI0416 10:44:47.744059       1 render_controller.go:516] Pool master: now targeting: rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:44:52.744619       1 node_controller.go:758] Setting node ip-10-0-131-84.us-west-1.compute.internal to desired config rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:44:52.745106       1 node_controller.go:758] Setting node ip-10-0-151-181.us-west-1.compute.internal to desired config rendered-worker-325e3c1dce12b375552cad19979aeb2c\nI0416 10:44:52.776641       1 node_controller.go:452] Pool master: node ip-10-0-131-84.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:44:52.784822       1 node_controller.go:452] Pool worker: node ip-10-0-151-181.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-325e3c1dce12b375552cad19979aeb2c\nI0416 10:44:53.791158       1 node_controller.go:452] Pool master: node ip-10-0-131-84.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0416 10:44:53.803411       1 node_controller.go:452] Pool worker: node ip-10-0-151-181.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0416 10:44:53.813080       1 node_controller.go:433] Pool master: node ip-10-0-131-84.us-west-1.compute.internal is now reporting unready: node ip-10-0-131-84.us-west-1.compute.internal is reporting Unschedulable\nI0416 10:44:53.826848       1 node_controller.go:433] Pool worker: node ip-10-0-151-181.us-west-1.compute.internal is now reporting unready: node ip-10-0-151-181.us-west-1.compute.internal is reporting Unschedulable\n
Apr 16 10:45:01.824 E ns/openshift-machine-config-operator pod/machine-config-server-zjbxm node/ip-10-0-139-50.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:10:20.358780       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:10:20.359599       1 api.go:51] Launching server on :22624\nI0416 10:10:20.359704       1 api.go:51] Launching server on :22623\nI0416 10:11:07.371398       1 api.go:97] Pool worker requested by 10.0.151.55:43975\nI0416 10:11:10.409043       1 api.go:97] Pool worker requested by 10.0.133.202:37733\n
Apr 16 10:45:14.246 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-134-80.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:45:09.673Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:45:09.679Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:45:09.679Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:45:09.680Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:45:09.680Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:45:09.680Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:45:09.681Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:45:09.681Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:47:13.673 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 16 10:47:38.822 E ns/openshift-cluster-node-tuning-operator pod/tuned-qg4cx node/ip-10-0-151-181.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 020-04-16 10:31:51,295 INFO     tuned.daemon.daemon: starting tuning\n2020-04-16 10:31:51,308 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-16 10:31:51,308 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-16 10:31:51,312 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-16 10:31:51,313 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-16 10:31:51,315 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-16 10:31:51,436 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-16 10:31:51,457 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0416 10:33:47.673461   47766 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:33:47.673479   47766 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:33:48.767657   47766 tuned.go:554] tuned "rendered" changed\nI0416 10:33:48.767682   47766 tuned.go:224] extracting tuned profiles\nI0416 10:33:48.767692   47766 tuned.go:417] getting recommended profile...\nI0416 10:33:48.885590   47766 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0416 10:45:27.972200   47766 tuned.go:554] tuned "rendered" changed\nI0416 10:45:27.972232   47766 tuned.go:224] extracting tuned profiles\nI0416 10:45:27.972241   47766 tuned.go:417] getting recommended profile...\nI0416 10:45:27.999551   47766 tuned.go:513] profile "ip-10-0-151-181.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0416 10:45:28.008284   47766 tuned.go:417] getting recommended profile...\nI0416 10:45:28.095097   47766 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0416 10:45:28.129500   47766 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 16 10:47:38.883 E ns/openshift-monitoring pod/node-exporter-hcm5r node/ip-10-0-151-181.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:47:38.935 E ns/openshift-multus pod/multus-cx482 node/ip-10-0-151-181.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:47:39.021 E ns/openshift-machine-config-operator pod/machine-config-daemon-bnhhk node/ip-10-0-151-181.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:47:41.448 E ns/openshift-monitoring pod/node-exporter-4pb6n node/ip-10-0-131-84.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:44:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:45:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:47:41.517 E ns/openshift-cluster-node-tuning-operator pod/tuned-x2f72 node/ip-10-0-131-84.us-west-1.compute.internal container=tuned container exited with code 143 (Error): g recommended profile...\nI0416 10:32:41.370761   84123 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0416 10:32:41.516152   84123 tuned.go:258] recommended tuned profile openshift-control-plane content changed\nI0416 10:32:42.353920   84123 tuned.go:417] getting recommended profile...\nI0416 10:32:42.544963   84123 tuned.go:444] active profile () != recommended profile (openshift-control-plane)\nI0416 10:32:42.545074   84123 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0416 10:32:42.545128   84123 tuned.go:310] starting tuned...\n2020-04-16 10:32:42,678 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-16 10:32:42,687 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-16 10:32:42,687 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-16 10:32:42,688 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-16 10:32:42,690 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-16 10:32:42,749 INFO     tuned.daemon.controller: starting controller\n2020-04-16 10:32:42,749 INFO     tuned.daemon.daemon: starting tuning\n2020-04-16 10:32:42,766 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-16 10:32:42,767 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-16 10:32:42,771 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-16 10:32:42,773 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-16 10:32:42,776 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-16 10:32:42,954 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-16 10:32:42,964 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 16 10:47:41.517 E ns/openshift-controller-manager pod/controller-manager-ljkcm node/ip-10-0-131-84.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): I0416 10:32:19.991011       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0416 10:32:19.992354       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0416 10:32:19.992373       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable@sha256:30512b4dcc153cda7e957155f12676842a2ac2567145242d18857e2c39b93e60"\nI0416 10:32:19.992441       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0416 10:32:19.992568       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 16 10:47:41.538 E ns/openshift-sdn pod/sdn-controller-gb6tf node/ip-10-0-131-84.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0416 10:34:48.449983       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 16 10:47:41.587 E ns/openshift-multus pod/multus-97vxm node/ip-10-0-131-84.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:47:41.599 E ns/openshift-multus pod/multus-admission-controller-b2rn2 node/ip-10-0-131-84.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 16 10:47:41.634 E ns/openshift-machine-config-operator pod/machine-config-daemon-tntw8 node/ip-10-0-131-84.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:47:41.652 E ns/openshift-machine-config-operator pod/machine-config-server-8zmkx node/ip-10-0-131-84.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:44:56.788250       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:44:56.789435       1 api.go:51] Launching server on :22624\nI0416 10:44:56.789506       1 api.go:51] Launching server on :22623\n
Apr 16 10:47:41.704 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-scheduler container exited with code 2 (Error):    1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0416 10:31:43.747944       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0416 10:31:43.748021       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0416 10:31:43.750585       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0416 10:31:43.758167       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0416 10:31:43.758234       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0416 10:31:43.758294       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0416 10:31:43.762061       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0416 10:31:43.762105       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0416 10:31:43.762157       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0416 10:31:43.762205       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0416 10:31:43.763078       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\n
Apr 16 10:47:41.704 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:06.897540       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:06.897569       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:08.912612       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:08.912694       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:10.925692       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:10.925775       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:12.942466       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:12.942499       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:14.951879       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:14.951906       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:16.961253       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:16.961283       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:18.971232       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:18.971271       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:20.980609       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:20.980637       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:22.989993       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:22.990081       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:45:24.998536       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:45:24.998569       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 16 10:47:41.752 E ns/openshift-etcd pod/etcd-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-16 10:27:22.404435 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-131-84.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-131-84.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-16 10:27:22.405501 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-16 10:27:22.406003 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-84.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-84.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-16 10:27:22.408315 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/16 10:27:22 grpc: addrConn.createTransport failed to connect to {https://10.0.131.84:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.131.84:9978: connect: connection refused". Reconnecting...\nWARNING: 2020/04/16 10:27:23 grpc: addrConn.createTransport failed to connect to {https://10.0.131.84:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.131.84:9978: connect: connection refused". Reconnecting...\n
Apr 16 10:47:41.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error):  dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:45:25.831015       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:45:25.831037       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:45:25.831179       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nE0416 10:45:25.870885       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0416 10:45:25.871108       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0416 10:45:25.871201       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0416 10:45:25.871375       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0416 10:45:25.871441       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0416 10:45:25.871563       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0416 10:45:25.871917       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0416 10:45:25.872301       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Apr 16 10:47:41.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): IServerToKubeletClientCert"\nI0416 10:42:39.000520       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com\nI0416 10:42:39.061796       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0416 10:45:25.807393       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:45:25.808201       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0416 10:45:25.808362       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0416 10:45:25.808430       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0416 10:45:25.808490       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0416 10:45:25.808545       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0416 10:45:25.808599       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0416 10:45:25.808649       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0416 10:45:25.808697       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0416 10:45:25.808849       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0416 10:45:25.808908       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0416 10:45:25.808956       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0416 10:45:25.809000       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Apr 16 10:47:41.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0416 10:30:09.959476       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 16 10:47:41.804 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:45:09.765883       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:09.766206       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:45:19.774914       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:19.775237       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 16 10:47:41.857 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0416 10:30:47.976060       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0416 10:30:47.977592       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0416 10:30:47.979545       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0416 10:30:47.979609       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0416 10:31:22.448351       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:33.686609       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 16 10:47:41.857 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): -manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:19.483038       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:24.931895       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:29.469373       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:34.561761       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:31:43.582233       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0416 10:31:43.583022       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0416 10:31:43.583111       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\n
Apr 16 10:47:41.857 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 0084       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:44:53.850449       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:02.292255       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:02.292566       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:03.872545       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:03.872938       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:12.303271       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:12.303644       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:13.886834       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:13.887168       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:22.311570       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:22.311945       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:45:23.896027       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:45:23.896387       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 16 10:47:41.857 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0416 10:30:15.303239       1 cmd.go:200] Using insecure, self-signed certificates\nI0416 10:30:15.303470       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587033015 cert, and key in /tmp/serving-cert-528779835/serving-signer.crt, /tmp/serving-cert-528779835/serving-signer.key\nI0416 10:30:16.665238       1 observer_polling.go:155] Starting file observer\nI0416 10:30:16.692803       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0416 10:31:34.181544       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0416 10:45:25.832202       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0416 10:45:25.832239       1 leaderelection.go:67] leaderelection lost\n
Apr 16 10:47:42.776 E ns/openshift-multus pod/multus-cx482 node/ip-10-0-151-181.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:47:45.540 E ns/openshift-multus pod/multus-cx482 node/ip-10-0-151-181.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:47:48.035 E ns/openshift-multus pod/multus-97vxm node/ip-10-0-131-84.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:47:48.525 E ns/openshift-multus pod/multus-cx482 node/ip-10-0-151-181.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:47:48.539 E ns/openshift-machine-config-operator pod/machine-config-daemon-bnhhk node/ip-10-0-151-181.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:47:53.277 E ns/openshift-machine-config-operator pod/machine-config-daemon-tntw8 node/ip-10-0-131-84.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:47:54.281 E ns/openshift-multus pod/multus-97vxm node/ip-10-0-131-84.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:47:56.271 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-84.us-west-1.compute.internal" not ready since 2020-04-16 10:47:41 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nEtcdMembersDegraded: ip-10-0-131-84.us-west-1.compute.internal members are unhealthy,  members are unknown
Apr 16 10:47:56.282 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-84.us-west-1.compute.internal" not ready since 2020-04-16 10:47:41 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 16 10:47:56.288 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-84.us-west-1.compute.internal" not ready since 2020-04-16 10:47:41 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 16 10:47:56.293 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-84.us-west-1.compute.internal" not ready since 2020-04-16 10:47:41 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 16 10:48:06.272 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-133-187.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:32:03.557Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:32:03.565Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:32:03.566Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:32:03.567Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:32:03.567Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:32:03.568Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:32:03.568Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:48:06.272 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-133-187.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-16T10:32:08.507490705Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-16T10:32:08.507636852Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-16T10:32:08.50968626Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-16T10:32:13.509286596Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-16T10:32:18.509198848Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-16T10:32:23.509209222Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-16T10:32:28.70576265Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 16 10:48:11.781 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 16 10:48:19.492 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 16 10:48:21.925 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-d87d6b67b-qpgtk node/ip-10-0-139-50.us-west-1.compute.internal container=operator container exited with code 255 (Error): 0.129.2.31:52174]\nI0416 10:47:56.566946       1 request.go:565] Throttling request took 143.9177ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0416 10:47:56.766932       1 request.go:565] Throttling request took 196.005227ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0416 10:48:02.059571       1 httplog.go:90] GET /metrics: (7.362882ms) 200 [Prometheus/2.15.2 10.128.2.27:56614]\nI0416 10:48:06.388247       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.Image total 0 items received\nI0416 10:48:12.258126       1 httplog.go:90] GET /metrics: (6.598201ms) 200 [Prometheus/2.15.2 10.129.2.31:52174]\nI0416 10:48:16.574216       1 request.go:565] Throttling request took 123.886808ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0416 10:48:16.774226       1 request.go:565] Throttling request took 197.475113ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0416 10:48:18.431002       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 42 items received\nI0416 10:48:18.434004       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:48:18.434557       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0416 10:48:18.436384       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0416 10:48:18.436494       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0416 10:48:18.436577       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0416 10:48:18.436932       1 builder.go:243] stopped\n
Apr 16 10:48:22.066 E ns/openshift-console-operator pod/console-operator-6f76f87ff7-9c7p9 node/ip-10-0-139-50.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): uring watch stream event decoding: unexpected EOF\nI0416 10:45:26.056911       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:45:26.057342       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:45:26.057663       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0416 10:45:26.058453       1 reflector.go:307] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: Failed to watch *v1.ConsoleCLIDownload: Get https://172.30.0.1:443/apis/console.openshift.io/v1/consoleclidownloads?allowWatchBookmarks=true&resourceVersion=29734&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0416 10:45:26.060058       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=35252&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0416 10:48:19.252260       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 13; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:48:19.252463       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 9; INTERNAL_ERROR") has prevented the request from succeeding\nI0416 10:48:19.334754       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:48:19.343102       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0416 10:48:19.343212       1 builder.go:210] server exited\n
Apr 16 10:48:24.300 E ns/openshift-authentication-operator pod/authentication-operator-77868bc455-f29w6 node/ip-10-0-139-50.us-west-1.compute.internal container=operator container exited with code 255 (Error):      1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"e8862d85-17f8-4a6b-bcf0-24a8778af93e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownEndpointDegraded: failed to GET well-known https://10.0.131.84:6443/.well-known/oauth-authorization-server: dial tcp 10.0.131.84:6443: connect: connection refused" to ""\nW0416 10:48:19.248971       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 81; INTERNAL_ERROR") has prevented the request from succeeding\nI0416 10:48:19.342832       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:48:19.347048       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0416 10:48:19.347263       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0416 10:48:19.347350       1 controller.go:70] Shutting down AuthenticationOperator2\nI0416 10:48:19.347403       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0416 10:48:19.347446       1 ingress_state_controller.go:157] Shutting down IngressStateController\nI0416 10:48:19.349220       1 logging_controller.go:93] Shutting down LogLevelController\nI0416 10:48:19.349277       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0416 10:48:19.349350       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0416 10:48:19.349396       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0416 10:48:19.349442       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nF0416 10:48:19.349591       1 builder.go:243] stopped\n
Apr 16 10:48:24.984 E ns/openshift-etcd-operator pod/etcd-operator-7b57696fd-jkt8l node/ip-10-0-139-50.us-west-1.compute.internal container=operator container exited with code 255 (Error): troller.go:212] Shutting down StatusSyncer-etcd\nI0416 10:48:19.848920       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0416 10:48:19.848937       1 bootstrap_teardown_controller.go:226] Shutting down BootstrapTeardownController\nI0416 10:48:19.848963       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0416 10:48:19.849044       1 etcdmemberipmigrator.go:299] Shutting down EtcdMemberIPMigrator\nI0416 10:48:19.849062       1 clustermembercontroller.go:99] Shutting down ClusterMemberController\nI0416 10:48:19.849086       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0416 10:48:19.855549       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0416 10:48:19.855571       1 base_controller.go:39] All RevisionController workers have been terminated\nI0416 10:48:19.855593       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0416 10:48:19.855602       1 base_controller.go:39] All InstallerStateController workers have been terminated\nI0416 10:48:19.855620       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0416 10:48:19.855636       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0416 10:48:19.855662       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0416 10:48:19.855671       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0416 10:48:19.855693       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0416 10:48:19.855708       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0416 10:48:19.855919       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0416 10:48:19.856035       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0416 10:48:19.856067       1 builder.go:243] stopped\n
Apr 16 10:48:28.249 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-88475db7-j7xrf node/ip-10-0-139-50.us-west-1.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): or-operator", Name:"kube-storage-version-migrator-operator", UID:"cb16905b-2a0f-4bbf-9ebc-c299152d1309", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0416 10:45:06.697819       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"cb16905b-2a0f-4bbf-9ebc-c299152d1309", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0416 10:48:06.084738       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"cb16905b-2a0f-4bbf-9ebc-c299152d1309", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0416 10:48:09.927653       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"cb16905b-2a0f-4bbf-9ebc-c299152d1309", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0416 10:48:23.861665       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0416 10:48:23.861786       1 leaderelection.go:66] leaderelection lost\n
Apr 16 10:48:29.796 E ns/openshift-machine-api pod/machine-api-operator-55f779b9f9-r2tj9 node/ip-10-0-139-50.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 16 10:48:29.859 E ns/openshift-machine-api pod/machine-api-controllers-6565974f46-l46ct node/ip-10-0-139-50.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 16 10:48:29.934 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6548789f47-6sv5n node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): lse reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)")\nI0416 10:47:56.275172       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f5144f92-7888-4905-a432-696147a4f9ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-131-84.us-west-1.compute.internal\" not ready since 2020-04-16 10:47:41 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)")\nI0416 10:48:11.522322       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f5144f92-7888-4905-a432-696147a4f9ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")\nI0416 10:48:11.548163       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"f5144f92-7888-4905-a432-696147a4f9ff", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")\nI0416 10:48:27.520833       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:48:27.528475       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nF0416 10:48:27.548512       1 leaderelection.go:67] leaderelection lost\n
Apr 16 10:48:52.884 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-181.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:48:50.514Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:48:50.520Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:48:50.521Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:48:50.522Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:48:50.522Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:48:50.522Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:48:50.522Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:48:50.522Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:48:50.523Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:48:50.523Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:48:50.523Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:48:50.523Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:48:50.523Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:48:50.523Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:48:50.524Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:48:50.524Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:48:54.780 E kube-apiserver Kube API started failing: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Apr 16 10:48:57.180 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 16 10:49:05.856 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-565d6b948-mtqqs node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-storage-operator container exited with code 1 (Error): {"level":"info","ts":1587034144.8939962,"logger":"cmd","msg":"Go Version: go1.10.8"}\n{"level":"info","ts":1587034144.8942683,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}\n{"level":"info","ts":1587034144.894327,"logger":"cmd","msg":"Version of operator-sdk: v0.4.0"}\n{"level":"info","ts":1587034144.8957438,"logger":"leader","msg":"Trying to become the leader."}\n{"level":"error","ts":1587034144.9100552,"logger":"cmd","msg":"","error":"Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused","stacktrace":"github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/github.com/openshift/cluster-storage-operator/cmd/manager/main.go:53\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:198"}\n
Apr 16 10:49:06.950 E ns/openshift-monitoring pod/cluster-monitoring-operator-6d67bd4bf-whfnj node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-monitoring-operator container exited with code 1 (Error): W0416 10:49:05.727681       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\n
Apr 16 10:49:07.059 E ns/openshift-machine-api pod/cluster-autoscaler-operator-9fdcccfb4-9wxll node/ip-10-0-131-84.us-west-1.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): I0416 10:49:06.103108       1 main.go:13] Go Version: go1.12.16\nI0416 10:49:06.103364       1 main.go:14] Go OS/Arch: linux/amd64\nI0416 10:49:06.103410       1 main.go:15] Version: cluster-autoscaler-operator v0.0.0-242-g8277beb-dirty\nF0416 10:49:06.106509       1 main.go:33] Failed to create operator: failed to create manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 16 10:49:09.499 E ns/openshift-machine-api pod/machine-api-controllers-6565974f46-ggssn node/ip-10-0-131-84.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Apr 16 10:49:09.499 E ns/openshift-machine-api pod/machine-api-controllers-6565974f46-ggssn node/ip-10-0-131-84.us-west-1.compute.internal container=nodelink-controller container exited with code 255 (Error): 
Apr 16 10:49:09.499 E ns/openshift-machine-api pod/machine-api-controllers-6565974f46-ggssn node/ip-10-0-131-84.us-west-1.compute.internal container=machine-controller container exited with code 255 (Error): 
Apr 16 10:50:10.213 E ns/openshift-marketplace pod/redhat-operators-857995d98-2w7qm node/ip-10-0-151-181.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 16 10:50:39.026 E ns/openshift-cluster-node-tuning-operator pod/tuned-rgfdm node/ip-10-0-133-187.us-west-1.compute.internal container=tuned container exited with code 143 (Error): on: static tuning from profile 'openshift-node' applied\nI0416 10:45:26.018864   70113 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:45:26.019350   70113 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0416 10:45:26.021749   70113 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: Unexpected watch close - watch lasted less than a second and no items received\nW0416 10:45:26.021830   70113 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:601: watch of *v1.Profile ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:601: Unexpected watch close - watch lasted less than a second and no items received\nE0416 10:45:27.022785   70113 reflector.go:156] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:601: Failed to list *v1.Profile: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/profiles?fieldSelector=metadata.name%3Dip-10-0-133-187.us-west-1.compute.internal&limit=500&resourceVersion=28058: dial tcp 172.30.0.1:443: connect: connection refused\nI0416 10:45:27.039339   70113 tuned.go:554] tuned "rendered" changed\nI0416 10:45:27.039365   70113 tuned.go:224] extracting tuned profiles\nI0416 10:45:27.039374   70113 tuned.go:417] getting recommended profile...\nI0416 10:45:27.177495   70113 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0416 10:45:28.029845   70113 tuned.go:513] profile "ip-10-0-133-187.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0416 10:45:28.466973   70113 tuned.go:417] getting recommended profile...\nI0416 10:45:28.578589   70113 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 16 10:50:39.044 E ns/openshift-monitoring pod/node-exporter-q489w node/ip-10-0-133-187.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:50:39.101 E ns/openshift-multus pod/multus-4bctd node/ip-10-0-133-187.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:50:39.142 E ns/openshift-machine-config-operator pod/machine-config-daemon-657w6 node/ip-10-0-133-187.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:50:42.946 E ns/openshift-multus pod/multus-4bctd node/ip-10-0-133-187.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:50:48.718 E ns/openshift-machine-config-operator pod/machine-config-daemon-657w6 node/ip-10-0-133-187.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:50:48.757 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-139-50.us-west-1.compute.internal" not ready since 2020-04-16 10:49:56 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-139-50.us-west-1.compute.internal members are unhealthy,  members are unknown
Apr 16 10:51:07.244 E ns/openshift-monitoring pod/openshift-state-metrics-84f56bbffc-sqvwp node/ip-10-0-134-80.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Apr 16 10:51:07.278 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-80.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/16 10:32:02 Watching directory: "/etc/alertmanager/config"\n
Apr 16 10:51:07.278 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-134-80.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/16 10:32:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/16 10:32:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/16 10:32:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/16 10:32:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/16 10:32:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/16 10:32:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/16 10:32:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/16 10:32:05 http.go:107: HTTPS: listening on [::]:9095\nI0416 10:32:05.933426       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/16 10:35:45 reverseproxy.go:437: http: proxy error: context canceled\n
Apr 16 10:51:07.303 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-80.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/16 10:45:07 Watching directory: "/etc/alertmanager/config"\n
Apr 16 10:51:07.303 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-134-80.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/16 10:45:07 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/16 10:45:07 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/16 10:45:07 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/16 10:45:07 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/16 10:45:07 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/16 10:45:07 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/16 10:45:07 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0416 10:45:07.721755       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/16 10:45:07 http.go:107: HTTPS: listening on [::]:9095\n
Apr 16 10:51:07.349 E ns/openshift-monitoring pod/thanos-querier-6f5cdd599d-mtvrc node/ip-10-0-134-80.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/16 10:32:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/16 10:32:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/16 10:32:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/16 10:32:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/16 10:32:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/16 10:32:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/16 10:32:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/16 10:32:05 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/16 10:32:05 http.go:107: HTTPS: listening on [::]:9091\nI0416 10:32:05.958847       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/16 10:44:59 server.go:3055: http: TLS handshake error from 10.128.2.28:55724: read tcp 10.129.2.23:9091->10.128.2.28:55724: read: connection reset by peer\n
Apr 16 10:51:07.370 E ns/openshift-monitoring pod/kube-state-metrics-665c65d965-tlnvn node/ip-10-0-134-80.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 16 10:51:08.804 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-scheduler container exited with code 2 (Error):      1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod 074a3893-196b-422a-b21c-40c96feb635d is not added to scheduler cache, so cannot be updated\nI0416 10:48:34.501753       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-84f47f7dfb-mqw7m: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:48:42.459439       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7d486f686f-twfx7 is bound successfully on node "ip-10-0-131-84.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0416 10:48:43.113040       1 scheduler.go:751] pod openshift-authentication/oauth-openshift-5d464df594-5swvs is bound successfully on node "ip-10-0-131-84.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0416 10:48:43.507251       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-84f47f7dfb-mqw7m: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:48:50.127900       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb-cg8qv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:48:51.508832       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb-cg8qv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:48:54.531346       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-84f47f7dfb-mqw7m: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Apr 16 10:51:08.804 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:40.566719       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:42.602735       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:42.602759       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:44.614583       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:44.614719       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:46.625837       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:46.625863       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:48.633511       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:48.633539       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:50.643839       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:50.643984       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:48:52.654507       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:48:52.654764       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nE0416 10:48:54.863153       1 reflector.go:307] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?allowWatchBookmarks=true&resourceVersion=33699&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.863420       1 reflector.go:307] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?allowWatchBookmarks=true&resourceVersion=39252&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Apr 16 10:51:08.824 E ns/openshift-etcd pod/etcd-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-16 10:26:48.409573 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-139-50.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-139-50.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-16 10:26:48.410433 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-16 10:26:48.410856 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-139-50.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-139-50.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/16 10:26:48 grpc: addrConn.createTransport failed to connect to {https://10.0.139.50:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.139.50:9978: connect: connection refused". Reconnecting...\n2020-04-16 10:26:48.413030 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/16 10:26:49 grpc: addrConn.createTransport failed to connect to {https://10.0.139.50:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.139.50:9978: connect: connection refused". Reconnecting...\n
Apr 16 10:51:08.881 E ns/openshift-machine-config-operator pod/machine-config-server-v5xcq node/ip-10-0-139-50.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:45:14.580779       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:45:14.582583       1 api.go:51] Launching server on :22624\nI0416 10:45:14.582628       1 api.go:51] Launching server on :22623\n
Apr 16 10:51:08.897 E ns/openshift-controller-manager pod/controller-manager-5tdbs node/ip-10-0-139-50.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): I0416 10:32:19.518048       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0416 10:32:19.521334       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0416 10:32:19.521376       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable@sha256:30512b4dcc153cda7e957155f12676842a2ac2567145242d18857e2c39b93e60"\nI0416 10:32:19.521497       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0416 10:32:19.521603       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 16 10:51:08.912 E ns/openshift-monitoring pod/node-exporter-6cd9f node/ip-10-0-139-50.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:47:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:48:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:51:08.942 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error):  failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600887       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600897       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600926       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600929       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600962       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600971       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0416 10:48:54.600995       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\n
Apr 16 10:51:08.942 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0416 10:27:33.687977       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 16 10:51:08.942 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:48:42.089843       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:42.090246       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:48:52.101353       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:52.102402       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 16 10:51:08.942 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0416 10:27:33.344161       1 cmd.go:200] Using insecure, self-signed certificates\nI0416 10:27:33.344537       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587032853 cert, and key in /tmp/serving-cert-118553641/serving-signer.crt, /tmp/serving-cert-118553641/serving-signer.key\nI0416 10:27:33.871179       1 observer_polling.go:155] Starting file observer\nI0416 10:27:33.902597       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0416 10:28:53.846811       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:29:05.162092       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0416 10:48:54.472335       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0416 10:48:54.472371       1 leaderelection.go:67] leaderelection lost\n
Apr 16 10:51:08.962 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): &watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.772998       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=27052&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.773084       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=37927&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.773601       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=24695&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.773858       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=24695&timeout=5m41s&timeoutSeconds=341&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.775536       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=37390&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:48:54.775937       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:6443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=34628&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Apr 16 10:51:08.962 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 3257       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:24.113946       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:32.580657       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:32.580974       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:34.131395       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:34.131758       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:42.594965       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:42.595482       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:44.140500       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:44.140826       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:52.604833       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:52.605227       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:48:54.149718       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:48:54.150128       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 16 10:51:08.962 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): cluster nodes. New node set: map[ip-10-0-131-84.us-west-1.compute.internal:{} ip-10-0-134-80.us-west-1.compute.internal:{} ip-10-0-146-104.us-west-1.compute.internal:{} ip-10-0-151-181.us-west-1.compute.internal:{}]\nI0416 10:48:47.390052       1 aws_loadbalancer.go:1375] Instances added to load-balancer a6e2775ce26624f69a665bac579a7ed1\nI0416 10:48:47.406645       1 aws_loadbalancer.go:1386] Instances removed from load-balancer a6e2775ce26624f69a665bac579a7ed1\nI0416 10:48:47.748524       1 event.go:281] Event(v1.ObjectReference{Kind:"Service", Namespace:"openshift-ingress", Name:"router-default", UID:"6e2775ce-2662-4f69-a665-bac579a7ed1f", APIVersion:"v1", ResourceVersion:"11619", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0416 10:48:47.786907       1 aws_loadbalancer.go:1375] Instances added to load-balancer a51e1c972daff49e8b7693b8b79feccd\nI0416 10:48:47.803048       1 aws_loadbalancer.go:1386] Instances removed from load-balancer a51e1c972daff49e8b7693b8b79feccd\nI0416 10:48:48.123868       1 controller.go:669] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0416 10:48:48.124059       1 event.go:281] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-lb-available-4722", Name:"service-test", UID:"51e1c972-daff-49e8-b769-3b8b79feccd4", APIVersion:"v1", ResourceVersion:"21242", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0416 10:48:50.099907       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb, need 3, creating 1\nI0416 10:48:50.123531       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-machine-config-operator", Name:"etcd-quorum-guard-58c94c75cb", UID:"004d6909-2837-42e4-994c-089d941cd799", APIVersion:"apps/v1", ResourceVersion:"39068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-58c94c75cb-cg8qv\n
Apr 16 10:51:08.962 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-139-50.us-west-1.compute.internal node/ip-10-0-139-50.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0416 10:28:37.328381       1 cmd.go:200] Using insecure, self-signed certificates\nI0416 10:28:37.328717       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587032917 cert, and key in /tmp/serving-cert-952186561/serving-signer.crt, /tmp/serving-cert-952186561/serving-signer.key\nI0416 10:28:38.126220       1 observer_polling.go:155] Starting file observer\nW0416 10:28:38.128923       1 builder.go:174] unable to get owner reference (falling back to namespace): Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods: dial tcp [::1]:6443: connect: connection refused\nI0416 10:28:38.130014       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0416 10:28:38.130814       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:28:53.663699       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:29:12.253403       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0416 10:48:54.552183       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0416 10:48:54.554856       1 leaderelection.go:67] leaderelection lost\n
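The exit-code-255 entries for the cert-regeneration and cert-recovery controllers above have the usual shape of a client-go leader-election shutdown: SIGTERM cancels the context, OnStoppedLeading fires, and the controller calls klog's Fatalf("leaderelection lost"), which exits with status 255. A minimal sketch of that pattern, reusing the lock namespace/name from the log; the durations are illustrative and not this component's actual settings.

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// ConfigMap-based lock, matching the "cert-recovery-controller-lock" messages above.
	lock, err := resourcelock.New(resourcelock.ConfigMapsResourceLock,
		"openshift-kube-controller-manager", "cert-recovery-controller-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		klog.Fatal(err)
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // a real controller cancels this on SIGTERM/SIGINT

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   60 * time.Second, // illustrative timings
		RenewDeadline:   35 * time.Second,
		RetryPeriod:     10 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { <-ctx.Done() }, // controller loops run here
			OnStoppedLeading: func() {
				// klog's Fatal* exits the process with status 255 -- the code recorded above.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}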
Apr 16 10:51:09.004 E ns/openshift-sdn pod/sdn-controller-x4d28 node/ip-10-0-139-50.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0416 10:34:57.265527       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0416 10:34:57.291409       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"8e798370-b6a3-4901-b45b-6229ad6128ae", ResourceVersion:"30336", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722628365, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-139-50\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-16T10:06:05Z\",\"renewTime\":\"2020-04-16T10:34:57Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-139-50 became leader'\nI0416 10:34:57.291521       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0416 10:34:57.300061       1 master.go:51] Initializing SDN master\nI0416 10:34:57.322675       1 network_controller.go:61] Started OpenShift Network Controller\n
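The "Could not construct reference to ... no kind is registered for the type v1.ConfigMap in scheme" warning above (it recurs for the machine-config-operator further down) is an event-reference lookup against a runtime.Scheme that does not have the core/v1 types registered; leader election itself still succeeds, only the event is dropped. A minimal sketch of that lookup with stock client-go/apimachinery types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	scheme := runtime.NewScheme()

	// Without core/v1 registered, the kind lookup fails the same way as the log line:
	// `no kind is registered for the type v1.ConfigMap in scheme "..."`.
	if _, _, err := scheme.ObjectKinds(&corev1.ConfigMap{}); err != nil {
		fmt.Println("unregistered:", err)
	}

	// Registering the core group fixes the lookup (and would let the recorder emit the event).
	if err := corev1.AddToScheme(scheme); err != nil {
		fmt.Println("AddToScheme:", err)
	}
	gvks, _, _ := scheme.ObjectKinds(&corev1.ConfigMap{})
	fmt.Println("registered as:", gvks[0]) // "/v1, Kind=ConfigMap"
}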
Apr 16 10:51:09.047 E ns/openshift-multus pod/multus-pk897 node/ip-10-0-139-50.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:51:09.068 E ns/openshift-machine-config-operator pod/machine-config-daemon-s5zdw node/ip-10-0-139-50.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:51:09.088 E ns/openshift-multus pod/multus-admission-controller-lhndg node/ip-10-0-139-50.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 16 10:51:09.120 E ns/openshift-cluster-node-tuning-operator pod/tuned-bcflq node/ip-10-0-139-50.us-west-1.compute.internal container=tuned container exited with code 143 (Error): figuration.\n2020-04-16 10:32:12,898 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-16 10:32:12,898 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-16 10:32:12,969 INFO     tuned.daemon.controller: starting controller\n2020-04-16 10:32:12,970 INFO     tuned.daemon.daemon: starting tuning\n2020-04-16 10:32:12,988 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-16 10:32:12,989 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-16 10:32:12,997 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-16 10:32:13,001 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-16 10:32:13,006 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-16 10:32:13,164 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-16 10:32:13,176 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0416 10:45:26.029052   81555 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:45:26.033603   81555 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:45:27.700267   81555 tuned.go:513] profile "ip-10-0-139-50.us-west-1.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0416 10:45:27.732517   81555 tuned.go:554] tuned "rendered" changed\nI0416 10:45:27.732543   81555 tuned.go:224] extracting tuned profiles\nI0416 10:45:27.732552   81555 tuned.go:417] getting recommended profile...\nI0416 10:45:27.892789   81555 tuned.go:258] recommended tuned profile openshift-control-plane content unchanged\nI0416 10:45:28.566129   81555 tuned.go:417] getting recommended profile...\nI0416 10:45:28.731004   81555 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 16 10:51:12.870 E ns/openshift-sdn pod/sdn-k6fx9 node/ip-10-0-139-50.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:51:16.395 E ns/openshift-multus pod/multus-pk897 node/ip-10-0-139-50.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:51:19.897 E ns/openshift-multus pod/multus-pk897 node/ip-10-0-139-50.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:51:20.892 E ns/openshift-machine-config-operator pod/machine-config-daemon-s5zdw node/ip-10-0-139-50.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:51:20.945 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-133-187.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-16T10:51:18.758Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-16T10:51:18.765Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-16T10:51:18.767Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-16T10:51:18.769Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-16T10:51:18.769Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-16T10:51:18.769Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-16T10:51:18.770Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-16T10:51:18.770Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-16
Apr 16 10:51:23.927 E ns/openshift-multus pod/multus-pk897 node/ip-10-0-139-50.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:51:37.797 E ns/openshift-insights pod/insights-operator-76c4c96dcd-4c9dd node/ip-10-0-146-104.us-west-1.compute.internal container=operator container exited with code 2 (Error): ng events/openshift-apiserver with fingerprint=\nI0416 10:50:59.673483       1 request.go:565] Throttling request took 184.516207ms, request: GET:https://172.30.0.1:443/api/v1/nodes\nI0416 10:50:59.678698       1 diskrecorder.go:63] Recording config/node/ip-10-0-139-50.us-west-1.compute.internal with fingerprint=\nI0416 10:50:59.684615       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0416 10:50:59.684711       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0416 10:50:59.690105       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0416 10:50:59.693353       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0416 10:50:59.696494       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0416 10:50:59.700701       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0416 10:50:59.752735       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0416 10:50:59.755802       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0416 10:50:59.758871       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0416 10:50:59.759156       1 diskrecorder.go:170] Writing 50 records to /var/lib/insights-operator/insights-2020-04-16-105059.tar.gz\nI0416 10:50:59.767466       1 diskrecorder.go:134] Wrote 50 records to disk in 8ms\nI0416 10:50:59.767496       1 periodic.go:151] Periodic gather config completed in 1.137s\nI0416 10:51:06.587342       1 httplog.go:90] GET /metrics: (5.930147ms) 200 [Prometheus/2.15.2 10.131.0.23:55498]\nI0416 10:51:22.831211       1 status.go:298] The operator is healthy\nI0416 10:51:22.860999       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0416 10:51:22.865558       1 configobserver.go:90] Found cloud.openshift.com token\nI0416 10:51:22.865584       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0416 10:51:35.954001       1 httplog.go:90] GET /metrics: (92.573994ms) 200 [Prometheus/2.15.2 10.128.2.17:40240]\n
Apr 16 10:51:40.676 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-55b4748b49-5f9b7 node/ip-10-0-146-104.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 74] Shutting down StaticPodStateController ...\nI0416 10:51:39.530868       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0416 10:51:39.530883       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0416 10:51:39.530899       1 base_controller.go:74] Shutting down InstallerController ...\nI0416 10:51:39.530913       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0416 10:51:39.530928       1 target_config_reconciler.go:126] Shutting down TargetConfigReconciler\nI0416 10:51:39.530945       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0416 10:51:39.530958       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0416 10:51:39.531001       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0416 10:51:39.531023       1 base_controller.go:39] All NodeController workers have been terminated\nI0416 10:51:39.531042       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0416 10:51:39.531051       1 base_controller.go:39] All RevisionController workers have been terminated\nI0416 10:51:39.531077       1 base_controller.go:49] Shutting down worker of  controller ...\nI0416 10:51:39.531095       1 base_controller.go:39] All  workers have been terminated\nI0416 10:51:39.531114       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0416 10:51:39.531134       1 base_controller.go:39] All PruneController workers have been terminated\nI0416 10:51:39.531151       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0416 10:51:39.531160       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0416 10:51:39.531183       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0416 10:51:39.531203       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0416 10:51:39.531427       1 builder.go:243] stopped\n
Apr 16 10:51:41.678 E ns/openshift-machine-config-operator pod/machine-config-operator-67bf4b9f6d-2nrb4 node/ip-10-0-146-104.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): I0416 10:45:00.347875       1 start.go:45] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:45:00.350885       1 leaderelection.go:242] attempting to acquire leader lease  openshift-machine-config-operator/machine-config...\nE0416 10:46:56.014733       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"d5406168-64ec-40a9-8965-36ba4d323132", ResourceVersion:"36233", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722628542, loc:(*time.Location)(0x27f8000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-67bf4b9f6d-2nrb4_0ff130c1-271b-431f-82c7-5e42e3944709\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-04-16T10:46:56Z\",\"renewTime\":\"2020-04-16T10:46:56Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-67bf4b9f6d-2nrb4_0ff130c1-271b-431f-82c7-5e42e3944709 became leader'\nI0416 10:46:56.014829       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0416 10:46:56.719763       1 operator.go:264] Starting MachineConfigOperator\n
Apr 16 10:51:42.754 E ns/openshift-machine-config-operator pod/machine-config-controller-b4666b74b-kgk8z node/ip-10-0-146-104.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): pute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-325e3c1dce12b375552cad19979aeb2c\nI0416 10:51:04.539092       1 node_controller.go:452] Pool worker: node ip-10-0-134-80.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0416 10:51:04.559746       1 node_controller.go:433] Pool worker: node ip-10-0-134-80.us-west-1.compute.internal is now reporting unready: node ip-10-0-134-80.us-west-1.compute.internal is reporting Unschedulable\nI0416 10:51:08.575156       1 node_controller.go:433] Pool master: node ip-10-0-139-50.us-west-1.compute.internal is now reporting unready: node ip-10-0-139-50.us-west-1.compute.internal is reporting NotReady=False\nI0416 10:51:28.305282       1 node_controller.go:433] Pool master: node ip-10-0-139-50.us-west-1.compute.internal is now reporting unready: node ip-10-0-139-50.us-west-1.compute.internal is reporting Unschedulable\nI0416 10:51:29.393839       1 node_controller.go:442] Pool master: node ip-10-0-139-50.us-west-1.compute.internal has completed update to rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:51:29.414077       1 node_controller.go:435] Pool master: node ip-10-0-139-50.us-west-1.compute.internal is now reporting ready\nI0416 10:51:33.305778       1 node_controller.go:758] Setting node ip-10-0-146-104.us-west-1.compute.internal to desired config rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:51:33.336841       1 node_controller.go:452] Pool master: node ip-10-0-146-104.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-2d2632f2f49ac51680b433fdbbc7117e\nI0416 10:51:34.361300       1 node_controller.go:452] Pool master: node ip-10-0-146-104.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0416 10:51:34.396328       1 node_controller.go:433] Pool master: node ip-10-0-146-104.us-west-1.compute.internal is now reporting unready: node ip-10-0-146-104.us-west-1.compute.internal is reporting Unschedulable\n
Apr 16 10:52:02.033 E ns/openshift-console pod/console-75cc897ff5-s6t2s node/ip-10-0-146-104.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-04-16T10:34:21Z cmd/main: cookies are secure!\n2020-04-16T10:34:21Z cmd/main: Binding to [::]:8443...\n2020-04-16T10:34:21Z cmd/main: using TLS\n2020-04-16T10:45:30Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-16T10:45:36Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-16T10:45:38Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-16T10:48:54Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp 172.30.0.1:443: connect: connection refused\n2020-04-16T10:49:00Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-16T10:49:05Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-16T10:49:10Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 16 10:52:21.503 E kube-apiserver Kube API started failing: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 16 10:52:21.503 - 14s   E kube-apiserver Kube API is not responding to GET requests
Apr 16 10:52:36.503 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 16 10:52:49.010 E clusteroperator/monitoring changed Degraded to True: UpdatingThanosQuerierFailed: Failed to rollout the stack. Error: running task Updating Thanos Querier failed: reconciling Thanos Querier ServiceAccount failed: retrieving ServiceAccount object failed: etcdserver: request timed out
Apr 16 10:52:49.325 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5cfb9d64bc-lfccc node/ip-10-0-131-84.us-west-1.compute.internal container=manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-04-16T10:52:28Z" level=debug msg="debug logging enabled"\ntime="2020-04-16T10:52:28Z" level=info msg="setting up client for manager"\ntime="2020-04-16T10:52:28Z" level=info msg="setting up manager"\ntime="2020-04-16T10:52:34Z" level=info msg="registering components"\ntime="2020-04-16T10:52:34Z" level=info msg="setting up scheme"\ntime="2020-04-16T10:52:34Z" level=info msg="setting up controller"\ntime="2020-04-16T10:52:41Z" level=fatal msg="etcdserver: leader changed"\n
Apr 16 10:53:50.335 E ns/openshift-monitoring pod/node-exporter-f4bs8 node/ip-10-0-134-80.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:52:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:53:50.371 E ns/openshift-cluster-node-tuning-operator pod/tuned-rpzzp node/ip-10-0-134-80.us-west-1.compute.internal container=tuned container exited with code 143 (Error):  unexpected EOF\nI0416 10:45:27.997058   48707 tuned.go:554] tuned "rendered" changed\nI0416 10:45:27.997087   48707 tuned.go:224] extracting tuned profiles\nI0416 10:45:27.997097   48707 tuned.go:417] getting recommended profile...\nI0416 10:45:27.998820   48707 tuned.go:513] profile "ip-10-0-134-80.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0416 10:45:28.110363   48707 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0416 10:45:28.335450   48707 tuned.go:417] getting recommended profile...\nI0416 10:45:28.450198   48707 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0416 10:48:54.751745   48707 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:48:54.759256   48707 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0416 10:48:54.760106   48707 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=35646&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0416 10:48:57.364656   48707 tuned.go:513] profile "ip-10-0-134-80.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0416 10:48:57.530735   48707 tuned.go:554] tuned "rendered" changed\nI0416 10:48:57.530760   48707 tuned.go:224] extracting tuned profiles\nI0416 10:48:57.530770   48707 tuned.go:417] getting recommended profile...\nI0416 10:48:57.645597   48707 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0416 10:48:58.335420   48707 tuned.go:417] getting recommended profile...\nI0416 10:48:58.448681   48707 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 16 10:53:50.424 E ns/openshift-multus pod/multus-q5zp5 node/ip-10-0-134-80.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:53:50.458 E ns/openshift-machine-config-operator pod/machine-config-daemon-vrwhd node/ip-10-0-134-80.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:53:52.234 E kube-apiserver failed contacting the API: Get https://api.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=39301&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp 50.18.212.81:6443: connect: connection refused
Apr 16 10:53:53.409 E ns/openshift-sdn pod/sdn-86vhn node/ip-10-0-134-80.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:53:54.487 E ns/openshift-multus pod/multus-q5zp5 node/ip-10-0-134-80.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:53:55.602 E ns/openshift-machine-config-operator pod/machine-config-daemon-vrwhd node/ip-10-0-134-80.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:53:55.602 E ns/openshift-machine-config-operator pod/machine-config-daemon-vrwhd node/ip-10-0-134-80.us-west-1.compute.internal container=machine-config-daemon container exited with code 255 (Error): I0416 10:53:54.556798    2214 start.go:74] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:53:54.590768    2214 start.go:84] Calling chroot("/rootfs")\nI0416 10:53:54.591713    2214 rpm-ostree.go:366] Running captured: rpm-ostree status --json\nI0416 10:53:54.934875    2214 daemon.go:209] Booted osImageURL: registry.svc.ci.openshift.org/ci-op-kgh86rjz/stable-initial@sha256:b47a2bccd4d516f92a96edb2bab6975265fb16e7cc3eddfbc4a11e723e70206f (44.81.202004151531-0)\nF0416 10:53:54.935928    2214 start.go:128] Failed to initialize ClientBuilder: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined\n
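The machine-config-daemon failure above is the standard client-go in-cluster bootstrap error: rest.InClusterConfig() refuses to build a config when the service environment variables are not visible to the process, which here shows up transiently while the node is coming back from its config update. A minimal sketch of that code path, assuming only stock client-go:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig reads KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT and the
	// mounted service-account token; when the env vars are missing it returns
	// "unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and
	// KUBERNETES_SERVICE_PORT must be defined" -- the message in the log above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("in-cluster config unavailable:", err)
		return
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("client build failed:", err)
		return
	}
	_ = client // the daemon would carry on with this client from here
}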
Apr 16 10:53:56.746 E ns/openshift-multus pod/multus-q5zp5 node/ip-10-0-134-80.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:54:05.250 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-84.us-west-1.compute.internal node/ip-10-0-131-84.us-west-1.compute.internal container=kube-controller-manager container exited with code 255 (Error): etadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: unknown\nE0416 10:53:58.233829       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nI0416 10:54:02.391449       1 node_lifecycle_controller.go:1137] node ip-10-0-146-104.us-west-1.compute.internal hasn't been updated for 1m0.113910283s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2020-04-16 10:50:42 +0000 UTC,LastTransitionTime:2020-04-16 10:53:42 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}\nI0416 10:54:02.391507       1 node_lifecycle_controller.go:1137] node ip-10-0-146-104.us-west-1.compute.internal hasn't been updated for 1m0.113975478s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2020-04-16 10:50:42 +0000 UTC,LastTransitionTime:2020-04-16 10:53:42 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}\nI0416 10:54:02.391535       1 node_lifecycle_controller.go:1137] node ip-10-0-146-104.us-west-1.compute.internal hasn't been updated for 1m0.114004106s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2020-04-16 10:50:42 +0000 UTC,LastTransitionTime:2020-04-16 10:53:42 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}\nI0416 10:54:02.391559       1 node_lifecycle_controller.go:1137] node ip-10-0-146-104.us-west-1.compute.internal hasn't been updated for 1m0.114028561s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2020-04-16 10:50:42 +0000 UTC,LastTransitionTime:2020-04-16 10:53:42 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}\nI0416 10:54:02.525270       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded\nF0416 10:54:02.525396       1 controllermanager.go:291] leaderelection lost\n
Apr 16 10:54:05.497 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-5cfb9d64bc-lfccc node/ip-10-0-131-84.us-west-1.compute.internal container=manager container exited with code 1 (Error): cing cluster operator status" controller=credreq_status\ntime="2020-04-16T10:53:13Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-04-16T10:53:13Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-04-16T10:53:13Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-04-16T10:53:13Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-04-16T10:53:13Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-04-16T10:53:39Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\nE0416 10:53:52.050213       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to watch *v1.Secret: Get https://172.30.0.1:443/api/v1/secrets?resourceVersion=43404&timeoutSeconds=559&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0416 10:53:52.050670       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to watch *v1.CredentialsRequest: Get https://172.30.0.1:443/apis/cloudcredential.openshift.io/v1/credentialsrequests?resourceVersion=39307&timeoutSeconds=308&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0416 10:53:52.099709       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/configmaps?resourceVersion=43616&timeoutSeconds=457&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\ntime="2020-04-16T10:54:03Z" level=fatal msg="unable to run the manager" error="leader election lost"\n
Apr 16 10:54:06.109 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6fcddd475-mrkjx node/ip-10-0-151-181.us-west-1.compute.internal container=snapshot-controller container exited with code 255 (Error): 
Apr 16 10:54:11.255 E clusteroperator/authentication changed Degraded to True: RouteHealth_FailedGet::WellKnownEndpoint_Error: RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-kgh86rjz-1d6bd.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.129.0.14:37864->172.30.0.10:53: i/o timeout\nWellKnownEndpointDegraded: failed to GET well-known https://10.0.146.104:6443/.well-known/oauth-authorization-server: dial tcp 10.0.146.104:6443: connect: connection refused
Apr 16 10:54:15.907 E ns/openshift-machine-config-operator pod/machine-config-daemon-vrwhd node/ip-10-0-134-80.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:54:21.492 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): :48.294177       1 store.go:1342] Monitoring clusterserviceversions.operators.coreos.com count at <storage-prefix>//operators.coreos.com/clusterserviceversions\nI0416 10:51:48.358637       1 trace.go:116] Trace[1395172365]: "List" url:/api/v1/configmaps,user-agent:catalog/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.139.50 (started: 2020-04-16 10:51:47.612745847 +0000 UTC m=+1049.641675901) (total time: 745.848109ms):\nTrace[1395172365]: [745.844829ms] [745.509583ms] Writing http response done count:415\nI0416 10:51:48.414595       1 client.go:361] parsed scheme: "endpoint"\nI0416 10:51:48.414735       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://10.0.131.84:2379 0  <nil>} {https://10.0.139.50:2379 0  <nil>} {https://10.0.146.104:2379 0  <nil>} {https://localhost:2379 0  <nil>}]\nI0416 10:51:48.427899       1 store.go:1342] Monitoring subscriptions.operators.coreos.com count at <storage-prefix>//operators.coreos.com/subscriptions\nE0416 10:51:51.398451       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0416 10:51:51.479410       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0416 10:52:06.772801       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0416 10:52:06.772990       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Apr 16 10:54:21.492 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0416 10:32:41.674379       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 16 10:54:21.492 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:52:02.428638       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:52:02.429072       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0416 10:52:06.741711       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:52:06.742178       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 16 10:54:21.492 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error):  1 client_cert_rotation_controller.go:140] Starting CertRotationController - "KubeControllerManagerClient"\nI0416 10:45:31.956372       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "KubeControllerManagerClient"\nI0416 10:45:31.956377       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "KubeControllerManagerClient"\nI0416 10:52:06.733528       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:52:06.733994       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0416 10:52:06.734034       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0416 10:52:06.734053       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0416 10:52:06.734068       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0416 10:52:06.734080       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0416 10:52:06.734096       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0416 10:52:06.734109       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0416 10:52:06.734122       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0416 10:52:06.734134       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0416 10:52:06.734155       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0416 10:52:06.734168       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0416 10:52:06.734177       1 cabundlesyncer.go:86] CA bundle controller shut down\nF0416 10:52:06.742119       1 leaderelection.go:67] leaderelection lost\n
Apr 16 10:54:21.509 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): e=operatorpkis": unable to monitor quota for resource "network.operator.openshift.io/v1, Resource=operatorpkis", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "template.openshift.io/v1, Resource=templateinstances": unable to monitor quota for resource "template.openshift.io/v1, Resource=templateinstances"]\nI0416 10:50:09.365339       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0416 10:50:09.365368       1 policy_controller.go:147] Started Origin Controllers\nI0416 10:50:09.365370       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0416 10:50:09.365673       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0416 10:50:09.369679       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0416 10:50:09.382187       1 shared_informer.go:204] Caches are synced for resource quota \nW0416 10:51:35.384299       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 299; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.390032       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 313; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.390117       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 301; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 16 10:54:21.509 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): tch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=24564&timeout=5m33s&timeoutSeconds=333&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:17.690631       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=25257&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:17.692617       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=25712&timeoutSeconds=504&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:17.693718       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=24564&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:17.694834       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=29638&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0416 10:52:06.825021       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0416 10:52:06.825863       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0416 10:52:06.825933       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0416 10:52:06.825975       1 csrcontroller.go:100] Shutting down CSR controller\nI0416 10:52:06.826003       1 csrcontroller.go:102] CSR controller shut down\n
Apr 16 10:54:21.509 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): 6443: connect: connection refused\nE0416 10:34:09.794933       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:15.106906       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0416 10:34:22.652059       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0416 10:34:22.652646       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nE0416 10:34:22.652605       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0416 10:34:22.652956       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nE0416 10:34:22.750780       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Apr 16 10:54:21.509 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 7796       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:34.358162       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:51:36.721699       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:36.722030       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:51:44.366311       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:44.366678       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:51:46.762585       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:46.762940       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:51:54.377407       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:54.377786       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:51:56.789856       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:51:56.790453       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0416 10:52:04.389783       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0416 10:52:04.390193       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 16 10:54:21.537 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-scheduler container exited with code 2 (Error): .; waiting\nE0416 10:51:56.360729       1 factory.go:494] pod is already present in the activeQ\nI0416 10:51:56.365615       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb-f8fkv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:51:58.337665       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb-f8fkv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:51:58.353203       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-6b858b5898-j64s5: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:52:03.934642       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-6b858b5898-j64s5: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:52:03.943890       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-58c94c75cb-f8fkv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0416 10:52:04.142326       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7496cc9f54-8www2 is bound successfully on node "ip-10-0-131-84.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0416 10:52:06.339333       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-6b858b5898-j64s5: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Apr 16 10:54:21.537 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:47.896727       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:47.896823       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:49.873860       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:49.873994       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:51.888402       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:51.888431       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:53.900793       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:53.900825       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:55.921577       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:55.921681       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:57.932883       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:57.932911       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:51:59.941560       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:51:59.941587       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:52:01.953173       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:52:01.953210       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:52:03.965011       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:52:03.965046       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0416 10:52:05.982025       1 certsync_controller.go:65] Syncing configmaps: []\nI0416 10:52:05.982059       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 16 10:54:21.633 E ns/openshift-controller-manager pod/controller-manager-q2lkh node/ip-10-0-146-104.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error):  172.30.0.1:443: connect: connection refused\nE0416 10:48:54.773425       1 reflector.go:320] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://172.30.0.1:443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=34373&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0416 10:51:35.383657       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 25; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.383883       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 15; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.384003       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 47; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.388210       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 53; INTERNAL_ERROR") has prevented the request from succeeding\nW0416 10:51:35.388329       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 11; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 16 10:54:21.683 E ns/openshift-monitoring pod/node-exporter-xp6m8 node/ip-10-0-146-104.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:18Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:48Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:51:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-16T10:52:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 16 10:54:21.821 E ns/openshift-cluster-node-tuning-operator pod/tuned-shqv9 node/ip-10-0-146-104.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 416 10:48:54.757074   83395 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0416 10:48:54.779579   83395 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0416 10:48:54.804583   83395 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=35646&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0416 10:48:57.453881   83395 tuned.go:513] profile "ip-10-0-146-104.us-west-1.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0416 10:48:57.597078   83395 tuned.go:554] tuned "rendered" changed\nI0416 10:48:57.597107   83395 tuned.go:224] extracting tuned profiles\nI0416 10:48:57.597115   83395 tuned.go:417] getting recommended profile...\nI0416 10:48:57.731040   83395 tuned.go:258] recommended tuned profile openshift-control-plane content unchanged\nI0416 10:48:57.884605   83395 tuned.go:417] getting recommended profile...\nI0416 10:48:58.018340   83395 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0416 10:51:47.406957   83395 tuned.go:513] profile "ip-10-0-146-104.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0416 10:51:47.445405   83395 tuned.go:513] profile "ip-10-0-146-104.us-west-1.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0416 10:51:47.885420   83395 tuned.go:417] getting recommended profile...\nI0416 10:51:48.264765   83395 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0416 10:52:06.748240   83395 tuned.go:114] received signal: terminated\nI0416 10:52:06.748434   83395 tuned.go:351] sending TERM to PID 83425\n
Apr 16 10:54:21.868 E ns/openshift-sdn pod/sdn-controller-lhpk6 node/ip-10-0-146-104.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0416 10:35:06.794697       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 16 10:54:21.882 E ns/openshift-multus pod/multus-admission-controller-l9s5d node/ip-10-0-146-104.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 16 10:54:21.900 E ns/openshift-multus pod/multus-lwmgs node/ip-10-0-146-104.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 16 10:54:21.933 E ns/openshift-machine-config-operator pod/machine-config-daemon-nqdvw node/ip-10-0-146-104.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 16 10:54:21.945 E ns/openshift-machine-config-operator pod/machine-config-server-9tkl2 node/ip-10-0-146-104.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0416 10:44:59.668184       1 start.go:38] Version: machine-config-daemon-4.4.0-202004090539-2-gaea58635-dirty (aea586355d17e7587947a798421462cfab8538f4)\nI0416 10:44:59.669671       1 api.go:51] Launching server on :22624\nI0416 10:44:59.669718       1 api.go:51] Launching server on :22623\n
Apr 16 10:54:26.803 E ns/openshift-etcd pod/etcd-ip-10-0-146-104.us-west-1.compute.internal node/ip-10-0-146-104.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-16 10:27:55.609230 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-146-104.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-146-104.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-16 10:27:55.610070 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-16 10:27:55.610488 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-146-104.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-146-104.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/16 10:27:55 grpc: addrConn.createTransport failed to connect to {https://10.0.146.104:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.146.104:9978: connect: connection refused". Reconnecting...\n2020-04-16 10:27:55.612416 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/16 10:27:56 grpc: addrConn.createTransport failed to connect to {https://10.0.146.104:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.146.104:9978: connect: connection refused". Reconnecting...\n
Apr 16 10:54:28.096 E ns/openshift-multus pod/multus-lwmgs node/ip-10-0-146-104.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:54:30.207 E ns/openshift-multus pod/multus-lwmgs node/ip-10-0-146-104.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:54:33.251 E ns/openshift-machine-config-operator pod/machine-config-daemon-nqdvw node/ip-10-0-146-104.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 16 10:54:35.149 E ns/openshift-console-operator pod/console-operator-6f76f87ff7-z6lgk node/ip-10-0-131-84.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): ceVersion=42316&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 172.30.0.1:443: i/o timeout\nE0416 10:54:23.240377       1 reflector.go:307] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterOperator: Get https://172.30.0.1:443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=43533&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp 172.30.0.1:443: i/o timeout\nE0416 10:54:23.248058       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=42800&timeout=9m14s&timeoutSeconds=554&watch=true: dial tcp 172.30.0.1:443: i/o timeout\nE0416 10:54:23.248127       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-console/configmaps?allowWatchBookmarks=true&resourceVersion=42800&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp 172.30.0.1:443: i/o timeout\nE0416 10:54:23.248314       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: Get https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dconsole&resourceVersion=41860&timeout=9m46s&timeoutSeconds=586&watch=true: dial tcp 172.30.0.1:443: i/o timeout\nE0416 10:54:29.076404       1 leaderelection.go:331] error retrieving resource lock openshift-console-operator/console-operator-lock: Get https://172.30.0.1:443/api/v1/namespaces/openshift-console-operator/configmaps/console-operator-lock?timeout=35s: dial tcp 172.30.0.1:443: i/o timeout\nI0416 10:54:34.049106       1 leaderelection.go:288] failed to renew lease openshift-console-operator/console-operator-lock: timed out waiting for the condition\nF0416 10:54:34.049196       1 leaderelection.go:67] leaderelection lost\n
Apr 16 10:54:35.212 E ns/openshift-multus pod/multus-lwmgs node/ip-10-0-146-104.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Apr 16 10:54:49.453 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-139-50.us-west-1.compute.internal,ip-10-0-146-104.us-west-1.compute.internal members are unhealthy,  members are unknown
Apr 16 10:58:55.468 E ns/openshift-cluster-node-tuning-operator pod/tuned-rpzzp node/ip-10-0-134-80.us-west-1.compute.internal container=tuned container exited with code 2 (Error): E0416 10:53:54.726483    2399 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0416 10:53:54.726575    2399 tuned.go:698] increased retry period to 20\nE0416 10:54:14.726909    2399 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0416 10:54:14.726939    2399 tuned.go:698] increased retry period to 40\nE0416 10:54:54.727451    2399 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0416 10:54:54.727475    2399 tuned.go:698] increased retry period to 80\nE0416 10:56:14.727744    2399 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0416 10:56:14.727771    2399 tuned.go:698] increased retry period to 160\nE0416 10:58:54.728088    2399 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0416 10:58:54.728117    2399 tuned.go:698] increased retry period to 320\nE0416 10:58:54.728131    2399 tuned.go:702] seen 5 errors in 300 seconds (limit was 610), terminating...\npanic: cannot stat kubeconfig "/root/.kube/config"\n\ngoroutine 1 [running]:\ngithub.com/openshift/cluster-node-tuning-operator/pkg/tuned.Run(0xc000043832, 0x16d99e8, 0xd)\n	/go/src/github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:743 +0x1cf\nmain.main()\n	/go/src/github.com/openshift/cluster-node-tuning-operator/cmd/cluster-node-tuning-operator/main.go:60 +0x343\n
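
Note on the tuned failure above: the log shows the daemon repeatedly failing to stat /root/.kube/config, doubling its retry period (20, 40, 80, 160, 320) and then terminating once errors persist past its budget, so the kubelet restarts the container. The snippet below is an illustrative sketch of that general backoff-and-give-up pattern only, not the operator's actual pkg/tuned code; the function name, base wait, and budget values are assumptions chosen for the example.

```go
// Illustrative sketch (assumed names and values), not the operator's code:
// poll for a file, doubling the retry period on each miss, and exit non-zero
// once a fixed time budget is exhausted so the pod can be restarted cleanly.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path, doubling the wait between attempts.
// baseWait and budget are hypothetical parameters for this example.
func waitForFile(path string, baseWait, budget time.Duration) error {
	wait := baseWait
	deadline := time.Now().Add(budget)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // file appeared; resume normal operation
		} else if !os.IsNotExist(err) {
			return fmt.Errorf("cannot stat %q: %w", path, err)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%q still missing after %s, giving up", path, budget)
		}
		fmt.Printf("file %q missing, retrying in %s\n", path, wait)
		time.Sleep(wait)
		wait *= 2 // exponential backoff, mirroring "increased retry period to 20, 40, 80, ..."
	}
}

func main() {
	// /root/.kube/config is the path from the log; 10s and 5m are example values.
	if err := waitForFile("/root/.kube/config", 10*time.Second, 5*time.Minute); err != nil {
		// Exiting non-zero (the real daemon panics) lets the kubelet restart the container.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```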