Result: SUCCESS
Tests: 5 failed / 20 succeeded
Started: 2020-04-17 17:19
Elapsed: 1h54m
Work namespace: ci-op-tzfqnxlq
Refs: release-4.4:322a876e, 125:6289f754
pod: 94b1b6d0-80cf-11ea-95fc-0a58ac104965
repo: openshift/cluster-node-tuning-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 40m48s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 10s of 37m5s (0%):

Apr 17 18:39:53.985 E ns/e2e-k8s-service-lb-available-8247 svc/service-test Service stopped responding to GET requests on reused connections
Apr 17 18:39:54.156 I ns/e2e-k8s-service-lb-available-8247 svc/service-test Service started responding to GET requests on reused connections
Apr 17 18:40:37.985 E ns/e2e-k8s-service-lb-available-8247 svc/service-test Service stopped responding to GET requests over new connections
Apr 17 18:40:38.984 - 7s    E ns/e2e-k8s-service-lb-available-8247 svc/service-test Service is not responding to GET requests over new connections
Apr 17 18:40:47.230 I ns/e2e-k8s-service-lb-available-8247 svc/service-test Service started responding to GET requests over new connections
Apr 17 18:42:02.985 E ns/e2e-k8s-service-lb-available-8247 svc/service-test Service stopped responding to GET requests on reused connections
Apr 17 18:42:03.166 I ns/e2e-k8s-service-lb-available-8247 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1587150319.xml
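
The disruption windows above are reported by a monitor that issues periodic GET requests against the service's load-balancer endpoint and records every interval in which those requests fail. Below is a minimal sketch of that pattern only; the endpoint URL, one-second interval, and log format are placeholders, not the actual openshift-tests implementation.

```go
// availability_probe.go: a minimal sketch of a disruption monitor.
// It polls a service endpoint once per second and logs the start and end of
// every window in which GET requests fail. URL and timings are placeholders.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const endpoint = "http://a1b2c3.elb.amazonaws.com/" // hypothetical service LB hostname
	client := &http.Client{Timeout: 5 * time.Second}

	var downSince time.Time // zero while the service is reachable

	for range time.Tick(1 * time.Second) {
		resp, err := client.Get(endpoint)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		switch {
		case !ok && downSince.IsZero():
			// Transition from up to down: open a disruption window.
			downSince = time.Now()
			fmt.Printf("%s E Service stopped responding to GET requests\n", downSince.Format(time.StampMilli))
		case ok && !downSince.IsZero():
			// Transition from down to up: close the window and report its length.
			fmt.Printf("%s I Service started responding again after %s\n",
				time.Now().Format(time.StampMilli), time.Since(downSince).Round(time.Millisecond))
			downSince = time.Time{}
		}
	}
}
```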



Cluster upgrade Cluster frontend ingress remain available 40m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 32s of 40m16s (1%):

Apr 17 18:40:18.432 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 17 18:40:18.805 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 17 18:40:24.432 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 17 18:40:24.791 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 17 18:40:34.432 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 17 18:40:34.786 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 17 18:41:13.432 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 17 18:41:13.432 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Apr 17 18:41:13.432 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Apr 17 18:41:13.432 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 17 18:41:14.431 - 3s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 17 18:41:14.431 - 3s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Apr 17 18:41:14.431 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Apr 17 18:41:14.431 - 9s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 17 18:41:18.775 I ns/openshift-console route/console Route started responding to GET requests over new connections
Apr 17 18:41:18.791 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 17 18:41:23.693 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Apr 17 18:41:23.694 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1587150319.xml
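
The route events above distinguish probes made "over new connections" from probes made "on reused connections". One plausible way to reproduce that split, sketched here with a plain net/http prober rather than the real test harness (route host and TLS handling are assumptions), is to drive the same route with two clients: one with keep-alives disabled so every request dials a fresh TCP/TLS connection, and one that keeps a single connection open and reuses it.

```go
// Two clients against the same route: "new connections" vs "reused connections".
// The route host is a placeholder; TLS verification is skipped only because
// this sketch has no CA bundle to hand.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	const routeURL = "https://console-openshift-console.apps.example.com/" // hypothetical route host

	// Every request on this client dials a new connection.
	newConnClient := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
			TLSClientConfig:   &tls.Config{InsecureSkipVerify: true},
		},
	}

	// This client keeps an idle connection open and reuses it.
	reusedConnClient := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			MaxIdleConnsPerHost: 1,
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
		},
	}

	for range time.Tick(time.Second) {
		for name, c := range map[string]*http.Client{
			"new connections":    newConnClient,
			"reused connections": reusedConnClient,
		} {
			resp, err := c.Get(routeURL)
			if err != nil {
				log.Printf("E Route stopped responding to GET requests over %s: %v", name, err)
				continue
			}
			resp.Body.Close()
			log.Printf("I Route responded over %s: %s", name, resp.Status)
		}
	}
}
```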



Cluster upgrade Kubernetes APIs remain available 40m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 40m16s (0%):

Apr 17 18:57:47.250 E kube-apiserver Kube API started failing: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 52.12.236.241:6443: connect: connection refused
Apr 17 18:57:48.051 E kube-apiserver Kube API is not responding to GET requests
Apr 17 18:57:48.183 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1587150319.xml
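
The failure message shows the probe is a GET against /api/v1/namespaces/kube-system?timeout=15s on the external API endpoint. A rough equivalent using client-go is sketched below; the kubeconfig path and polling interval are assumptions, and the Get call with a context assumes client-go v0.18 or newer.

```go
// Sketch of a Kubernetes API availability probe: GET the kube-system namespace
// through client-go and treat any error as an API disruption.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	config.Timeout = 15 * time.Second // mirrors the ?timeout=15s seen in the probe URL

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	for range time.Tick(time.Second) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), "kube-system", metav1.GetOptions{})
		if err != nil {
			log.Printf("E Kube API is not responding to GET requests: %v", err)
			continue
		}
		log.Printf("I Kube API responded to GET requests")
	}
}
```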



Cluster upgrade OpenShift APIs remain available 40m17s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 57s of 40m16s (2%):

Apr 17 18:40:04.907 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 17 18:40:04.996 I openshift-apiserver OpenShift API started responding to GET requests
Apr 17 18:40:20.908 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 17 18:40:21.907 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 17 18:40:35.994 I openshift-apiserver OpenShift API started responding to GET requests
Apr 17 18:40:51.907 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 17 18:40:52.907 - 28s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 17 18:41:21.994 I openshift-apiserver OpenShift API started responding to GET requests
Apr 17 18:41:38.907 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 17 18:41:39.907 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 17 18:41:53.996 I openshift-apiserver OpenShift API started responding to GET requests
Apr 17 18:57:47.075 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 44.233.160.250:6443: connect: connection refused
Apr 17 18:57:47.907 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 17 18:57:48.187 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1587150319.xml
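
The probe URL here requests an imagestream literally named "missing" in the openshift-apiserver namespace, which suggests (an inference, not stated in the output) that a NotFound answer still counts as the OpenShift API being reachable, and only transport errors and timeouts are recorded as disruption. A hedged sketch of that behaviour with openshift/client-go, again with a placeholder kubeconfig path and assuming a clientset version whose Get takes a context:

```go
// Sketch of an OpenShift API probe: GET an imagestream that is expected not to
// exist, and count the API as available as long as it answers at all,
// including with NotFound.
package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	imageclient "github.com/openshift/client-go/image/clientset/versioned"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	config.Timeout = 15 * time.Second

	client, err := imageclient.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	for range time.Tick(time.Second) {
		_, err := client.ImageV1().ImageStreams("openshift-apiserver").Get(context.TODO(), "missing", metav1.GetOptions{})
		switch {
		case err == nil || apierrors.IsNotFound(err):
			// Either answer proves the openshift-apiserver handled the request.
			log.Printf("I OpenShift API responded to GET requests")
		default:
			log.Printf("E OpenShift API is not responding to GET requests: %v", err)
		}
	}
}
```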



openshift-tests Monitor cluster while tests execute 40m53s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
210 error level events were detected during this test run:

Apr 17 18:24:46.749 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:27:53.018 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-8457774b5c" has successfully progressed.
Apr 17 18:28:16.705 E ns/openshift-etcd-operator pod/etcd-operator-576d46dbdc-ncfxz node/ip-10-0-141-119.us-west-2.compute.internal container=operator container exited with code 255 (Error): 1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0417 18:28:15.671415       1 base_controller.go:39] All StaticPodStateController workers have been terminated\nI0417 18:28:15.671430       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0417 18:28:15.671435       1 base_controller.go:39] All RevisionController workers have been terminated\nI0417 18:28:15.671448       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0417 18:28:15.671462       1 base_controller.go:39] All NodeController workers have been terminated\nI0417 18:28:15.671475       1 base_controller.go:49] Shutting down worker of  controller ...\nI0417 18:28:15.671480       1 base_controller.go:39] All  workers have been terminated\nI0417 18:28:15.671491       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0417 18:28:15.671497       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0417 18:28:15.671507       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0417 18:28:15.671513       1 base_controller.go:39] All PruneController workers have been terminated\nI0417 18:28:15.671523       1 base_controller.go:49] Shutting down worker of  controller ...\nI0417 18:28:15.671528       1 base_controller.go:39] All  workers have been terminated\nI0417 18:28:15.671538       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0417 18:28:15.671553       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0417 18:28:15.671580       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0417 18:28:15.671589       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0417 18:28:15.671590       1 secure_serving.go:222] Stopped listening on [::]:8443\nF0417 18:28:15.663764       1 builder.go:209] server exited\n
Apr 17 18:29:45.948 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5dccc4bd8c-b2mg8 node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): controller.go:202] Shutting down EncryptionConditionController\nI0417 18:29:45.207653       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0417 18:29:45.207662       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0417 18:29:45.207674       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0417 18:29:45.207685       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0417 18:29:45.207697       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0417 18:29:45.207720       1 base_controller.go:74] Shutting down RevisionController ...\nI0417 18:29:45.207806       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0417 18:29:45.207838       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0417 18:29:45.207849       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0417 18:29:45.207861       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0417 18:29:45.207874       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0417 18:29:45.207883       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0417 18:29:45.207893       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0417 18:29:45.207905       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0417 18:29:45.207913       1 termination_observer.go:154] Shutting down TerminationObserver\nI0417 18:29:45.207930       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0417 18:29:45.208176       1 base_controller.go:74] Shutting down PruneController ...\nF0417 18:29:45.208132       1 builder.go:243] stopped\n
Apr 17 18:31:17.766 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-568c4648f9-2mhl7 node/ip-10-0-134-171.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): on:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-141-119.us-west-2.compute.internal" from revision 8 to 9 because static pod is ready\nI0417 18:24:28.092118       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7c244da1-9bfd-4879-807c-2670bcc6e984", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 9"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 8; 2 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9"\nI0417 18:24:29.080589       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7c244da1-9bfd-4879-807c-2670bcc6e984", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-9 -n openshift-kube-controller-manager:\ncause by changes in data.status\nI0417 18:24:33.686728       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7c244da1-9bfd-4879-807c-2670bcc6e984", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-9-ip-10-0-141-119.us-west-2.compute.internal -n openshift-kube-controller-manager because it was missing\nI0417 18:31:17.095128       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:31:17.095477       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0417 18:31:17.095504       1 builder.go:209] server exited\n
Apr 17 18:31:19.966 E clusteroperator/monitoring changed Degraded to True: UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: reconciling Alertmanager ClusterRole failed: retrieving ClusterRole object failed: etcdserver: leader changed
Apr 17 18:31:27.084 E kube-apiserver Kube API started failing: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded
Apr 17 18:32:05.740 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:32:07.933 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0417 18:32:07.230808       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:32:07.232514       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:32:07.235044       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0417 18:32:07.235508       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0417 18:32:07.235929       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 17 18:32:24.934 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:32:47.000 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:33:20.131 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0417 18:33:18.956400       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:33:18.958213       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:33:18.963493       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0417 18:33:18.963780       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0417 18:33:18.968475       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 17 18:33:39.220 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0417 18:33:38.143891       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:33:38.147705       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:33:38.151420       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0417 18:33:38.152231       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0417 18:33:38.153128       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 17 18:34:26.144 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:34:31.174 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c9cb6c744-86mcr node/ip-10-0-141-119.us-west-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): "name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0417 18:15:39.363084       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"531f43d7-a8bb-48e4-8eaa-6c87649b92c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0417 18:15:39.387384       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"531f43d7-a8bb-48e4-8eaa-6c87649b92c3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nE0417 18:16:15.042666       1 leaderelection.go:331] error retrieving resource lock openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock: Get https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps/openshift-kube-storage-version-migrator-operator-lock?timeout=35s: unexpected EOF\nI0417 18:34:30.129588       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:34:30.129631       1 leaderelection.go:66] leaderelection lost\n
Apr 17 18:34:41.268 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0417 18:34:40.567022       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:34:40.568445       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:34:40.570324       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0417 18:34:40.570369       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0417 18:34:40.571621       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 17 18:34:42.324 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:35:00.396 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0417 18:35:00.088928       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:35:00.090184       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:35:00.091609       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0417 18:35:00.091648       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0417 18:35:00.092064       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 17 18:35:16.409 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:36:17.660 E ns/openshift-cluster-machine-approver pod/machine-approver-7f9d5cb5cd-m2wcl node/ip-10-0-141-119.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:53.168941       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:54.169450       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:55.169953       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:56.172246       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:57.172796       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0417 18:35:58.173280       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 17 18:36:21.691 E ns/openshift-insights pod/insights-operator-5c5cd6db56-s2lrv node/ip-10-0-154-74.us-west-2.compute.internal container=operator container exited with code 2 (Error): go:90] GET /metrics: (1.860605ms) 200 [Prometheus/2.15.2 10.128.2.22:59080]\nI0417 18:34:48.265082       1 httplog.go:90] GET /metrics: (6.470214ms) 200 [Prometheus/2.15.2 10.129.2.10:48042]\nI0417 18:34:54.282763       1 httplog.go:90] GET /metrics: (2.121162ms) 200 [Prometheus/2.15.2 10.128.2.22:59080]\nI0417 18:35:09.315629       1 status.go:298] The operator is healthy\nI0417 18:35:18.269688       1 httplog.go:90] GET /metrics: (10.715828ms) 200 [Prometheus/2.15.2 10.129.2.10:48042]\nI0417 18:35:24.282058       1 httplog.go:90] GET /metrics: (1.478942ms) 200 [Prometheus/2.15.2 10.128.2.22:59080]\nI0417 18:35:32.124875       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0417 18:35:32.124999       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0417 18:35:32.454262       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24928 (26004)\nI0417 18:35:32.454597       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24932 (26004)\nI0417 18:35:33.454755       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0417 18:35:33.454854       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0417 18:35:48.264642       1 httplog.go:90] GET /metrics: (6.037721ms) 200 [Prometheus/2.15.2 10.129.2.10:48042]\nI0417 18:35:54.282060       1 httplog.go:90] GET /metrics: (1.455185ms) 200 [Prometheus/2.15.2 10.128.2.22:59080]\nI0417 18:36:18.265138       1 httplog.go:90] GET /metrics: (6.535135ms) 200 [Prometheus/2.15.2 10.129.2.10:48042]\n
Apr 17 18:36:22.061 E ns/openshift-kube-storage-version-migrator pod/migrator-99b8668f9-vk2hb node/ip-10-0-139-211.us-west-2.compute.internal container=migrator container exited with code 2 (Error): I0417 18:18:29.168170       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0417 18:22:48.412144       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 17 18:36:39.109 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-139-211.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/17 18:22:50 Watching directory: "/etc/alertmanager/config"\n
Apr 17 18:36:39.109 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-139-211.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/17 18:22:50 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:22:50 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:22:50 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:22:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/17 18:22:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:22:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:22:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0417 18:22:50.994611       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/17 18:22:50 http.go:107: HTTPS: listening on [::]:9095\n
Apr 17 18:36:42.114 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-846d6c6b6d-rskrn node/ip-10-0-139-211.us-west-2.compute.internal container=operator container exited with code 255 (Error): 03994       1 operator.go:147] Finished syncing operator at 17.593551ms\nI0417 18:36:16.616840       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:16.616831199 +0000 UTC m=+1238.808830527\nI0417 18:36:16.953769       1 operator.go:147] Finished syncing operator at 336.92554ms\nI0417 18:36:22.708080       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:22.708064996 +0000 UTC m=+1244.900064187\nI0417 18:36:22.745329       1 operator.go:147] Finished syncing operator at 37.248069ms\nI0417 18:36:22.766162       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:22.766141329 +0000 UTC m=+1244.958140696\nI0417 18:36:22.785099       1 operator.go:147] Finished syncing operator at 18.948467ms\nI0417 18:36:29.248683       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:29.248670407 +0000 UTC m=+1251.440669521\nI0417 18:36:29.277341       1 operator.go:147] Finished syncing operator at 28.659334ms\nI0417 18:36:29.289081       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:29.289072145 +0000 UTC m=+1251.481071317\nI0417 18:36:29.365598       1 operator.go:147] Finished syncing operator at 76.514475ms\nI0417 18:36:29.365657       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:29.365649676 +0000 UTC m=+1251.557649948\nI0417 18:36:29.461848       1 operator.go:147] Finished syncing operator at 96.184964ms\nI0417 18:36:29.466461       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:29.466452901 +0000 UTC m=+1251.658452011\nI0417 18:36:29.657508       1 operator.go:147] Finished syncing operator at 191.04303ms\nI0417 18:36:40.933987       1 operator.go:145] Starting syncing operator at 2020-04-17 18:36:40.933972478 +0000 UTC m=+1263.125971822\nI0417 18:36:41.001419       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:36:41.002015       1 builder.go:243] stopped\nI0417 18:36:41.006033       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\n
Apr 17 18:36:50.845 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-5dd5f5bfb-8gdn9 node/ip-10-0-141-119.us-west-2.compute.internal container=operator container exited with code 255 (Error): r-operator-lock\nI0417 18:36:16.593528       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0417 18:36:17.954112       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0417 18:36:26.574521       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0417 18:36:32.636504       1 workload_controller.go:347] No service bindings found, nothing to delete.\nI0417 18:36:32.649969       1 workload_controller.go:193] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0417 18:36:34.640645       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0417 18:36:34.640668       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0417 18:36:34.642366       1 httplog.go:90] GET /metrics: (5.691548ms) 200 [Prometheus/2.15.2 10.128.2.22:38396]\nI0417 18:36:36.583346       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0417 18:36:40.205862       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0417 18:36:40.205966       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0417 18:36:40.208498       1 httplog.go:90] GET /metrics: (2.827762ms) 200 [Prometheus/2.15.2 10.129.2.10:58870]\nI0417 18:36:46.595768       1 leaderelection.go:283] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0417 18:36:49.744479       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:36:49.744719       1 builder.go:209] server exited\nI0417 18:36:49.763988       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\n
Apr 17 18:36:53.262 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:36:53.359 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-c46db57d4-xm2zf node/ip-10-0-151-146.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 17 18:36:56.382 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-146.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/17 18:23:02 Watching directory: "/etc/alertmanager/config"\n
Apr 17 18:36:56.382 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-146.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/17 18:23:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:23:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:23:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:23:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/17 18:23:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:23:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:23:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/17 18:23:03 http.go:107: HTTPS: listening on [::]:9095\nI0417 18:23:03.005416       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 17 18:36:58.859 E ns/openshift-service-ca-operator pod/service-ca-operator-7c649c46fd-2h7pg node/ip-10-0-141-119.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Apr 17 18:37:03.334 E ns/openshift-controller-manager pod/controller-manager-w8mvj node/ip-10-0-134-171.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): stream error: stream ID 37; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:38.480332       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 147; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:57.028446       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 151; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:57.028531       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 145; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:57.029195       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:57.032685       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 185; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:32:57.032822       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 197; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 17 18:37:06.515 E ns/openshift-monitoring pod/prometheus-adapter-6ccbfbf476-zg87t node/ip-10-0-151-146.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0417 18:22:53.534319       1 adapter.go:93] successfully using in-cluster auth\nI0417 18:22:53.993681       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 17 18:37:16.384 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:37:17.936 E ns/openshift-monitoring pod/node-exporter-v8lbl node/ip-10-0-141-119.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -17T18:17:10Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-17T18:17:10Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 17 18:37:30.524 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-146.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:37:04.741Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:37:04.745Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:37:04.745Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:37:04.746Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:37:04.746Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:37:04.746Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:37:04.748Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:37:04.748Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:37:33.073 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/17 18:24:16 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 17 18:37:33.073 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/17 18:24:16 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/17 18:24:16 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:24:16 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:24:16 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/17 18:24:16 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:24:16 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/17 18:24:16 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/17 18:24:16 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/17 18:24:16 http.go:107: HTTPS: listening on [::]:9091\nI0417 18:24:16.692096       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/17 18:36:34 oauthproxy.go:774: basicauth: 10.128.0.70:43646 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/17 18:36:45 oauthproxy.go:774: basicauth: 10.131.0.20:39246 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 17 18:37:33.073 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-17T18:24:15.961460756Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-17T18:24:15.961625475Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-17T18:24:15.963564242Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-17T18:24:21.117939602Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 17 18:37:38.643 E ns/openshift-marketplace pod/redhat-marketplace-65bc648b44-s9zrq node/ip-10-0-151-146.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Apr 17 18:37:41.661 E ns/openshift-marketplace pod/redhat-operators-dc468ff4b-gbrrh node/ip-10-0-151-146.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Apr 17 18:37:49.495 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 17 18:37:54.230 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:37:39.586Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:37:39.592Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:37:39.592Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:37:39.593Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:37:39.593Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:37:39.593Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:37:39.594Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:37:39.594Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:38:05.535 E ns/openshift-marketplace pod/community-operators-c6f8c4859-9nl27 node/ip-10-0-139-211.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 17 18:38:06.617 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error): o:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=23767&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:05.445170       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=21460&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:05.446298       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=21460&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:05.447370       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=29864&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:05.448378       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=28867&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:05.451629       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=29931&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0417 18:38:05.532530       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0417 18:38:05.532558       1 server.go:257] leaderelection lost\n
Apr 17 18:38:09.555 E ns/openshift-service-ca pod/service-ca-848f778467-h4g77 node/ip-10-0-134-171.us-west-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Apr 17 18:38:17.149 E ns/openshift-controller-manager pod/controller-manager-zplhx node/ip-10-0-141-119.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0417 18:37:15.665627       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0417 18:37:15.667732       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0417 18:37:15.667768       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0417 18:37:15.667871       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0417 18:37:15.667888       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Apr 17 18:38:17.313 E ns/openshift-controller-manager pod/controller-manager-rntft node/ip-10-0-154-74.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): -op-tzfqnxlq/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0417 18:37:03.952097       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0417 18:37:03.952187       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0417 18:37:03.952208       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0417 18:38:16.169001       1 leaderelection.go:252] successfully acquired lease openshift-controller-manager/openshift-master-controllers\nI0417 18:38:16.169380       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-controller-manager", Name:"openshift-master-controllers", UID:"d0a72b9b-ac12-4fc8-b3fd-e357b309d331", APIVersion:"v1", ResourceVersion:"30271", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' controller-manager-rntft became leader\nI0417 18:38:16.174706       1 controller_manager.go:144] Starting "openshift.io/serviceaccount-pull-secrets"\nI0417 18:38:16.197341       1 controller_manager.go:154] Started "openshift.io/serviceaccount-pull-secrets"\nI0417 18:38:16.197363       1 controller_manager.go:144] Starting "openshift.io/deployer"\nI0417 18:38:16.197494       1 docker_registry_service.go:143] Starting DockerRegistryServiceController controller\nI0417 18:38:16.197533       1 deleted_dockercfg_secrets.go:67] Starting DockercfgDeletedController controller\nI0417 18:38:16.197596       1 deleted_token_secrets.go:62] Starting DockercfgTokenDeletedController controller\nI0417 18:38:16.197632       1 create_dockercfg_secrets.go:207] Starting DockercfgController controller\nI0417 18:38:16.218135       1 controller_manager.go:154] Started "openshift.io/deployer"\nI0417 18:38:16.218157       1 controller_manager.go:144] Starting "openshift.io/templateinstance"\nI0417 18:38:16.218298       1 factory.go:73] Starting deployer controller\n
Apr 17 18:38:27.615 E ns/openshift-console pod/console-7d7d5c4ff4-pp5wr node/ip-10-0-134-171.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-04-17T18:23:10Z cmd/main: cookies are secure!\n2020-04-17T18:23:10Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-17T18:23:20Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-17T18:23:30Z cmd/main: Binding to [::]:8443...\n2020-04-17T18:23:30Z cmd/main: using TLS\n
Apr 17 18:38:38.756 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): ailed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps?allowWatchBookmarks=true&resourceVersion=29996&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:37.509300       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=28213&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:37.510688       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Infrastructure: Get https://localhost:6443/apis/config.openshift.io/v1/infrastructures?allowWatchBookmarks=true&resourceVersion=23806&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:37.511409       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=28233&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:37.521694       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?allowWatchBookmarks=true&resourceVersion=28213&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0417 18:38:38.045219       1 leaderelection.go:288] failed to renew lease openshift-kube-apiserver/cert-regeneration-controller-lock: timed out waiting for the condition\nI0417 18:38:38.045263       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 97d81e2a-99fd-4bba-a305-67255082a346 stopped leading\nF0417 18:38:38.045354       1 leaderelection.go:67] leaderelection lost\n
Apr 17 18:39:38.383 E ns/openshift-sdn pod/sdn-controller-x8dlk node/ip-10-0-141-119.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0417 18:09:09.443071       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0417 18:12:56.984283       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0417 18:16:15.039565       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 17 18:39:43.837 E ns/openshift-sdn pod/sdn-controller-dv487 node/ip-10-0-134-171.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0417 18:09:09.582139       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0417 18:16:15.050880       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 17 18:39:48.857 E ns/openshift-sdn pod/sdn-gh6c7 node/ip-10-0-134-171.us-west-2.compute.internal container=sdn container exited with code 255 (Error): cProxyRules took 46.166955ms\nI0417 18:38:55.880132    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.134.171:6443 10.0.141.119:6443 10.0.154.74:6443]\nI0417 18:38:56.018119    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.018143    2062 proxier.go:347] userspace syncProxyRules took 26.289661ms\nI0417 18:38:56.648401    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.134.171:10259 10.0.141.119:10259 10.0.154.74:10259]\nI0417 18:38:56.774448    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.774470    2062 proxier.go:347] userspace syncProxyRules took 25.600588ms\nI0417 18:39:07.558219    2062 pod.go:503] CNI_ADD openshift-kube-apiserver/revision-pruner-6-ip-10-0-134-171.us-west-2.compute.internal got IP 10.130.0.64, ofport 65\nI0417 18:39:10.829343    2062 pod.go:539] CNI_DEL openshift-kube-apiserver/revision-pruner-6-ip-10-0-134-171.us-west-2.compute.internal\nI0417 18:39:26.901180    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:26.901199    2062 proxier.go:347] userspace syncProxyRules took 26.25206ms\nI0417 18:39:35.476732    2062 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.5:6443]\nI0417 18:39:35.476758    2062 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0417 18:39:35.602527    2062 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:35.602545    2062 proxier.go:347] userspace syncProxyRules took 28.71783ms\nI0417 18:39:42.308063    2062 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0417 18:39:47.835027    2062 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 17 18:39:50.614 E ns/openshift-sdn pod/sdn-controller-hh7dw node/ip-10-0-154-74.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error):    1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\nI0417 18:16:17.801465       1 vnids.go:115] Allocated netid 7086224 for namespace "openshift-console"\nI0417 18:16:18.394193       1 vnids.go:115] Allocated netid 1663620 for namespace "openshift-console-operator"\nI0417 18:16:29.245587       1 vnids.go:115] Allocated netid 11703584 for namespace "openshift-ingress"\nI0417 18:18:10.018931       1 subnets.go:149] Created HostSubnet ip-10-0-128-242.us-west-2.compute.internal (host: "ip-10-0-128-242.us-west-2.compute.internal", ip: "10.0.128.242", subnet: "10.129.2.0/23")\nI0417 18:24:31.549271       1 vnids.go:115] Allocated netid 4690626 for namespace "e2e-openshift-api-available-9522"\nI0417 18:24:31.556322       1 vnids.go:115] Allocated netid 5712526 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-9270"\nI0417 18:24:31.565938       1 vnids.go:115] Allocated netid 11596308 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-3853"\nI0417 18:24:31.578028       1 vnids.go:115] Allocated netid 13354961 for namespace "e2e-kubernetes-api-available-2126"\nI0417 18:24:31.596844       1 vnids.go:115] Allocated netid 1724956 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-885"\nI0417 18:24:31.614000       1 vnids.go:115] Allocated netid 4500769 for namespace "e2e-k8s-service-lb-available-8247"\nI0417 18:24:31.668776       1 vnids.go:115] Allocated netid 12630815 for namespace "e2e-k8s-sig-apps-deployment-upgrade-3143"\nI0417 18:24:31.682236       1 vnids.go:115] Allocated netid 12895631 for namespace "e2e-k8s-sig-apps-job-upgrade-1341"\nI0417 18:24:31.703722       1 vnids.go:115] Allocated netid 9065260 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-8660"\nI0417 18:24:31.714300       1 vnids.go:115] Allocated netid 1643638 for namespace "e2e-frontend-ingress-available-9899"\n
Apr 17 18:40:06.456 E ns/openshift-multus pod/multus-admission-controller-6sbs5 node/ip-10-0-141-119.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 17 18:40:09.479 E ns/openshift-sdn pod/sdn-h2nmf node/ip-10-0-141-119.us-west-2.compute.internal container=sdn container exited with code 255 (Error): xier.go:347] userspace syncProxyRules took 25.380716ms\nI0417 18:38:55.879187    2054 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.134.171:6443 10.0.141.119:6443 10.0.154.74:6443]\nI0417 18:38:56.013863    2054 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.013894    2054 proxier.go:347] userspace syncProxyRules took 26.257728ms\nI0417 18:38:56.647971    2054 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.134.171:10259 10.0.141.119:10259 10.0.154.74:10259]\nI0417 18:38:56.783900    2054 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.783918    2054 proxier.go:347] userspace syncProxyRules took 25.296502ms\nI0417 18:39:26.916533    2054 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:26.916556    2054 proxier.go:347] userspace syncProxyRules took 35.879313ms\nI0417 18:39:35.477997    2054 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.5:6443]\nI0417 18:39:35.478099    2054 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0417 18:39:35.618535    2054 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:35.618560    2054 proxier.go:347] userspace syncProxyRules took 25.932468ms\nI0417 18:40:05.770509    2054 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:40:05.770531    2054 proxier.go:347] userspace syncProxyRules took 34.585583ms\nI0417 18:40:05.876173    2054 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-6sbs5\nI0417 18:40:08.474062    2054 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0417 18:40:08.474107    2054 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 17 18:40:37.725 E ns/openshift-sdn pod/sdn-fwb85 node/ip-10-0-128-242.us-west-2.compute.internal container=sdn container exited with code 255 (Error): xier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:43.498810    2292 proxier.go:347] userspace syncProxyRules took 28.837845ms\nI0417 18:38:55.877802    2292 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.134.171:6443 10.0.141.119:6443 10.0.154.74:6443]\nI0417 18:38:56.009666    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.009695    2292 proxier.go:347] userspace syncProxyRules took 28.775629ms\nI0417 18:38:56.646746    2292 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.134.171:10259 10.0.141.119:10259 10.0.154.74:10259]\nI0417 18:38:56.794440    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.794466    2292 proxier.go:347] userspace syncProxyRules took 29.540901ms\nI0417 18:39:26.934536    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:26.934566    2292 proxier.go:347] userspace syncProxyRules took 28.675563ms\nI0417 18:39:35.475311    2292 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.5:6443]\nI0417 18:39:35.475354    2292 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0417 18:39:35.611860    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:35.611894    2292 proxier.go:347] userspace syncProxyRules took 28.022397ms\nI0417 18:40:05.784714    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:40:05.784747    2292 proxier.go:347] userspace syncProxyRules took 39.638337ms\nI0417 18:40:35.919537    2292 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:40:35.919565    2292 proxier.go:347] userspace syncProxyRules took 28.540395ms\nF0417 18:40:36.943810    2292 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 17 18:40:58.706 E ns/openshift-multus pod/multus-zkbvj node/ip-10-0-141-119.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 17 18:41:02.830 E ns/openshift-sdn pod/sdn-95b22 node/ip-10-0-154-74.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 56.004123    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.004138    2197 proxier.go:347] userspace syncProxyRules took 23.880066ms\nI0417 18:38:56.650834    2197 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.134.171:10259 10.0.141.119:10259 10.0.154.74:10259]\nI0417 18:38:56.768602    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:38:56.768617    2197 proxier.go:347] userspace syncProxyRules took 23.341811ms\nI0417 18:39:26.880779    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:26.880858    2197 proxier.go:347] userspace syncProxyRules took 25.529677ms\nI0417 18:39:35.478752    2197 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.5:6443 10.130.0.5:6443]\nI0417 18:39:35.478849    2197 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0417 18:39:35.634607    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:39:35.634624    2197 proxier.go:347] userspace syncProxyRules took 24.192825ms\nI0417 18:40:05.763452    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:40:05.763469    2197 proxier.go:347] userspace syncProxyRules took 28.953103ms\nI0417 18:40:35.882890    2197 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:40:35.882906    2197 proxier.go:347] userspace syncProxyRules took 24.543658ms\nI0417 18:41:00.574753    2197 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0417 18:41:02.712060    2197 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0417 18:41:02.712089    2197 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 17 18:41:13.995 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-776ddcbdcd-l4jlr node/ip-10-0-154-74.us-west-2.compute.internal container=manager container exited with code 1 (Error): ine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-17T18:36:47Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-17T18:36:47Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-ovirt secret=openshift-machine-api/ovirt-credentials\ntime="2020-04-17T18:36:47Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-04-17T18:36:47Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-04-17T18:36:47Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-04-17T18:36:47Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-04-17T18:36:47Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-04-17T18:36:47Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-04-17T18:36:48Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-04-17T18:38:47Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-04-17T18:38:47Z" level=info msg="reconcile complete" controller=metrics elapsed=1.531474ms\ntime="2020-04-17T18:40:47Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-04-17T18:40:47Z" level=info msg="reconcile complete" controller=metrics elapsed=1.449584ms\ntime="2020-04-17T18:41:12Z" level=fatal msg="unable to run the manager" error="leader election lost"\n
Apr 17 18:41:15.084 - 29s   E openshift-apiserver OpenShift API is not responding to GET requests
Apr 17 18:41:42.062 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6fc77cf96f-4v7tv node/ip-10-0-139-211.us-west-2.compute.internal container=snapshot-controller container exited with code 255 (Error): 
Apr 17 18:41:45.097 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-776ddcbdcd-l4jlr node/ip-10-0-154-74.us-west-2.compute.internal container=manager container exited with code 1 (Error): Copying system trust bundle\ntime="2020-04-17T18:41:14Z" level=debug msg="debug logging enabled"\ntime="2020-04-17T18:41:14Z" level=info msg="setting up client for manager"\ntime="2020-04-17T18:41:14Z" level=info msg="setting up manager"\ntime="2020-04-17T18:41:44Z" level=fatal msg="unable to set up overall controller manager" error="Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: i/o timeout"\n
Apr 17 18:41:52.270 E ns/openshift-sdn pod/sdn-fq6zc node/ip-10-0-151-146.us-west-2.compute.internal container=sdn container exited with code 255 (Error): cco-metrics:cco-metrics\nI0417 18:41:14.100076   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:14.100096   87283 proxier.go:347] userspace syncProxyRules took 29.285378ms\nI0417 18:41:14.237222   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:14.237242   87283 proxier.go:347] userspace syncProxyRules took 29.307013ms\nI0417 18:41:14.966845   87283 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/controller-manager-service: to [10.128.0.67:443]\nI0417 18:41:14.966949   87283 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics to [10.128.0.67:2112]\nI0417 18:41:15.114828   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:15.114853   87283 proxier.go:347] userspace syncProxyRules took 28.978356ms\nI0417 18:41:15.245534   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:15.245553   87283 proxier.go:347] userspace syncProxyRules took 28.749433ms\nI0417 18:41:45.061625   87283 roundrobin.go:295] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/controller-manager-service:\nI0417 18:41:45.073768   87283 roundrobin.go:295] LoadBalancerRR: Removing endpoints for openshift-cloud-credential-operator/cco-metrics:cco-metrics\nI0417 18:41:45.201478   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:45.201500   87283 proxier.go:347] userspace syncProxyRules took 29.68773ms\nI0417 18:41:45.344700   87283 proxier.go:368] userspace proxy: processing 0 service events\nI0417 18:41:45.344724   87283 proxier.go:347] userspace syncProxyRules took 29.895255ms\nI0417 18:41:52.087532   87283 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0417 18:41:52.087582   87283 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 17 18:42:23.238 E ns/openshift-multus pod/multus-admission-controller-jnv8b node/ip-10-0-154-74.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 17 18:42:38.373 E ns/openshift-multus pod/multus-g7b57 node/ip-10-0-154-74.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 17 18:43:32.499 E ns/openshift-multus pod/multus-77dfc node/ip-10-0-134-171.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 17 18:44:25.642 E ns/openshift-multus pod/multus-fw4rm node/ip-10-0-151-146.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Apr 17 18:45:13.383 E ns/openshift-machine-config-operator pod/machine-config-operator-7857fcf478-f7lcj node/ip-10-0-141-119.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): :"", Name:"machine-config", UID:"8cfbbad2-f5b5-4e3a-aa6c-8789e36e69bf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-04-17-172004}]\nE0417 18:09:52.398026       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0417 18:09:52.400134       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0417 18:09:53.400549       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0417 18:09:57.442034       1 sync.go:61] [init mode] synced RenderConfig in 5.421675084s\nI0417 18:09:57.755040       1 sync.go:61] [init mode] synced MachineConfigPools in 312.7798ms\nI0417 18:10:28.214055       1 sync.go:61] [init mode] synced MachineConfigDaemon in 30.458984836s\nI0417 18:10:33.250221       1 sync.go:61] [init mode] synced MachineConfigController in 5.036125635s\nI0417 18:10:38.296120       1 sync.go:61] [init mode] synced MachineConfigServer in 5.045863183s\nI0417 18:12:38.303414       1 sync.go:61] [init mode] synced RequiredPools in 2m0.007253405s\nI0417 18:12:38.334904       1 sync.go:85] Initialization complete\nE0417 18:16:15.066624       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 17 18:47:08.697 E ns/openshift-machine-config-operator pod/machine-config-daemon-mzdfl node/ip-10-0-141-119.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:47:15.207 E ns/openshift-machine-config-operator pod/machine-config-daemon-lchrs node/ip-10-0-128-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:47:22.984 E ns/openshift-machine-config-operator pod/machine-config-daemon-mgbth node/ip-10-0-151-146.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:47:43.071 E ns/openshift-machine-config-operator pod/machine-config-daemon-2lfl9 node/ip-10-0-134-171.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:47:54.990 E ns/openshift-machine-config-operator pod/machine-config-daemon-fk8f5 node/ip-10-0-139-211.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:48:15.165 E ns/openshift-machine-config-operator pod/machine-config-controller-7cb899cc67-dcbnc node/ip-10-0-134-171.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): ontroller.go:452] Pool worker: node ip-10-0-151-146.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0417 18:16:18.503480       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0417 18:16:18.539302       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0417 18:16:33.056574       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0417 18:16:33.365313       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0417 18:20:01.260994       1 node_controller.go:452] Pool worker: node ip-10-0-128-242.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-cf5711928035fbf0ea19d5b7161bde3e\nI0417 18:20:01.261085       1 node_controller.go:452] Pool worker: node ip-10-0-128-242.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-cf5711928035fbf0ea19d5b7161bde3e\nI0417 18:20:01.261114       1 node_controller.go:452] Pool worker: node ip-10-0-128-242.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0417 18:25:12.167406       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0417 18:25:12.223369       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0417 18:31:27.338343       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0417 18:31:27.412586       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0417 18:35:33.630048       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0417 18:35:33.675724       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Apr 17 18:49:47.125 E ns/openshift-machine-config-operator pod/machine-config-server-h2zrn node/ip-10-0-141-119.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:10:36.856338       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:10:36.857447       1 api.go:51] Launching server on :22624\nI0417 18:10:36.857572       1 api.go:51] Launching server on :22623\nI0417 18:11:54.526836       1 api.go:97] Pool worker requested by 10.0.139.126:4968\nI0417 18:15:49.946555       1 api.go:97] Pool worker requested by 10.0.148.188:58588\n
Apr 17 18:49:54.654 E ns/openshift-machine-config-operator pod/machine-config-server-jrjzk node/ip-10-0-154-74.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:12:18.190696       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:12:18.191560       1 api.go:51] Launching server on :22624\nI0417 18:12:18.191973       1 api.go:51] Launching server on :22623\n
Apr 17 18:49:57.450 E ns/openshift-machine-config-operator pod/machine-config-server-7zp4s node/ip-10-0-134-171.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:10:34.087828       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:10:34.088909       1 api.go:51] Launching server on :22624\nI0417 18:10:34.088948       1 api.go:51] Launching server on :22623\nI0417 18:11:53.677858       1 api.go:97] Pool worker requested by 10.0.148.188:65055\n
Apr 17 18:49:58.512 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-146.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/17 18:37:12 Watching directory: "/etc/alertmanager/config"\n
Apr 17 18:49:58.512 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-146.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/17 18:37:26 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:37:26 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:37:26 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:37:26 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/17 18:37:26 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:37:26 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:37:26 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/17 18:37:26 http.go:107: HTTPS: listening on [::]:9095\nI0417 18:37:26.190622       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 17 18:49:58.521 E ns/openshift-authentication-operator pod/authentication-operator-fcf75d494-b49ws node/ip-10-0-134-171.us-west-2.compute.internal container=operator container exited with code 255 (Error): ,"reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0417 18:41:20.568365       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"eed2d6ac-e4ed-4966-82d4-3ae9820d64d5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "OperatorSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io openshift-browser-client)"\nI0417 18:41:55.876250       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-17T18:17:12Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-17T18:37:20Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-17T18:24:20Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-17T18:09:55Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0417 18:41:55.892927       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"eed2d6ac-e4ed-4966-82d4-3ae9820d64d5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get oauthclients.oauth.openshift.io openshift-browser-client)" to ""\nI0417 18:49:57.285169       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:49:57.285938       1 builder.go:243] stopped\nI0417 18:49:57.286007       1 controller.go:215] Shutting down RouterCertsDomainValidationController\n
Apr 17 18:50:23.449 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-211.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:50:16.527Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:50:16.533Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:50:16.533Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:50:16.535Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:50:16.535Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:51:42.518 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-134-171.us-west-2.compute.internal" not ready since 2020-04-17 18:51:09 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-141-119.us-west-2.compute.internal,ip-10-0-134-171.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 17 18:52:12.384 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 17 18:52:41.737 E ns/openshift-monitoring pod/node-exporter-bs6v7 node/ip-10-0-151-146.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:49:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:50:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 17 18:52:41.784 E ns/openshift-cluster-node-tuning-operator pod/tuned-9b5mt node/ip-10-0-151-146.us-west-2.compute.internal container=tuned container exited with code 143 (Error):  recommended profile...\nI0417 18:37:01.063797   75617 tuned.go:444] active profile () != recommended profile (openshift-node)\nI0417 18:37:01.063904   75617 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0417 18:37:01.063981   75617 tuned.go:310] starting tuned...\n2020-04-17 18:37:01,216 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-17 18:37:01,224 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-17 18:37:01,224 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-17 18:37:01,225 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-17 18:37:01,226 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-17 18:37:01,265 INFO     tuned.daemon.controller: starting controller\n2020-04-17 18:37:01,265 INFO     tuned.daemon.daemon: starting tuning\n2020-04-17 18:37:01,278 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-17 18:37:01,279 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-17 18:37:01,283 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:37:01,285 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-17 18:37:01,286 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-17 18:37:01,419 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:37:01,440 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0417 18:50:29.418502   75617 tuned.go:513] profile "ip-10-0-151-146.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0417 18:50:29.881582   75617 tuned.go:417] getting recommended profile...\nI0417 18:50:29.999698   75617 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 17 18:52:41.818 E ns/openshift-multus pod/multus-kprhn node/ip-10-0-151-146.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 18:52:41.876 E ns/openshift-machine-config-operator pod/machine-config-daemon-dckcj node/ip-10-0-151-146.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:52:46.534 E ns/openshift-multus pod/multus-kprhn node/ip-10-0-151-146.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
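The "pod may not transition Running->Pending" entries here and later in the log are test-framework invariant violations rather than container errors: the upgrade monitor flags any pod that was already observed Running and is later reported Pending again (the log does not show the underlying cause). A minimal sketch of that kind of check, assuming a simple phase cache keyed by pod UID (podPhaseChecker is an illustrative name, not the openshift/origin monitor code):

// Illustrative sketch of the invariant, not the real e2e monitor.
package monitor

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

type podPhaseChecker struct {
	lastPhase map[types.UID]corev1.PodPhase
}

func newPodPhaseChecker() *podPhaseChecker {
	return &podPhaseChecker{lastPhase: map[types.UID]corev1.PodPhase{}}
}

// check records the latest observed phase and returns an error when a pod
// previously seen Running is observed Pending, which a monitor would emit
// as an "E ... invariant violation" event like the ones above.
func (c *podPhaseChecker) check(pod *corev1.Pod) error {
	prev, seen := c.lastPhase[pod.UID]
	c.lastPhase[pod.UID] = pod.Status.Phase
	if seen && prev == corev1.PodRunning && pod.Status.Phase == corev1.PodPending {
		return fmt.Errorf("invariant violation: pod %s/%s may not transition Running->Pending",
			pod.Namespace, pod.Name)
	}
	return nil
}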
Apr 17 18:52:47.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0417 18:32:26.415037       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0417 18:32:26.416389       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:32:26.417903       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0417 18:32:26.417938       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0417 18:37:56.680598       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:13.268234       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:30.712064       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 17 18:52:47.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 5725       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:49:53.396067       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:01.824239       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:01.824496       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:03.444154       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:03.444510       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:11.835865       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:11.836147       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:13.454485       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:13.454856       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:21.842801       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:21.843083       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:50:23.463458       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:23.463718       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 17 18:52:47.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): t: connection refused\nE0417 18:38:21.778888       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:27.692687       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:29.509026       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:29.509052       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0417 18:38:33.586515       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:35.869474       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:35.869508       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0417 18:38:38.406748       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\n
Apr 17 18:52:47.092 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0417 18:32:07.657542       1 cmd.go:200] Using insecure, self-signed certificates\nI0417 18:32:07.657851       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587148327 cert, and key in /tmp/serving-cert-441852125/serving-signer.crt, /tmp/serving-cert-441852125/serving-signer.key\nI0417 18:32:08.323320       1 observer_polling.go:155] Starting file observer\nI0417 18:32:08.364641       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0417 18:37:57.311188       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:09.007287       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:38:23.109432       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0417 18:50:27.498642       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:50:27.498670       1 leaderelection.go:67] leaderelection lost\n
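Several of the control-plane exits above (codes 1 and 255) end with "Received SIGTERM or SIGINT signal, shutting down controller." followed by a fatal "leaderelection lost": these controllers hold a lease via client-go leader election and treat losing it as fatal, so even a graceful shutdown during the static-pod rollout surfaces as a non-zero container exit. A minimal sketch of that pattern, assuming the cert-recovery-controller-lock ConfigMap lock seen in the log (the lease timings and identity handling here are illustrative, not the operator's actual configuration):

// Illustrative sketch of the client-go leader-election pattern.
package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock name/namespace taken from the log above; identity handling is illustrative.
	lock, err := resourcelock.New(
		resourcelock.ConfigMapsResourceLock,
		"openshift-kube-controller-manager", "cert-recovery-controller-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// controller work would run here until the lease is lost
			},
			OnStoppedLeading: func() {
				// This is why "leaderelection lost" appears as a fatal,
				// non-zero exit in the events above.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}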
Apr 17 18:52:47.106 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:09.727993       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:09.728097       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:10.140182       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:10.140203       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:10.724399       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:10.724420       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:12.151171       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:12.151192       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:14.159406       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:14.159425       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:16.166516       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:16.166539       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:18.175733       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:18.175767       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:20.191540       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:20.191559       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:22.196907       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:22.196943       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:50:24.204480       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:50:24.204501       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 17 18:52:47.106 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-scheduler container exited with code 2 (Error): nsion-apiserver-authentication::client-ca-file"]: "kube-control-plane-signer" [] issuer="<self>" (2020-04-17 17:53:41 +0000 UTC to 2021-04-17 17:53:41 +0000 UTC (now=2020-04-17 18:38:43.109492194 +0000 UTC))\nI0417 18:38:43.109547       1 tlsconfig.go:179] loaded client CA [3/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-17 17:53:41 +0000 UTC to 2021-04-17 17:53:41 +0000 UTC (now=2020-04-17 18:38:43.109537202 +0000 UTC))\nI0417 18:38:43.109581       1 tlsconfig.go:179] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-17 17:53:36 +0000 UTC to 2030-04-15 17:53:36 +0000 UTC (now=2020-04-17 18:38:43.109572138 +0000 UTC))\nI0417 18:38:43.109615       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kube-csr-signer_@1587146981" [] issuer="kubelet-signer" (2020-04-17 18:09:40 +0000 UTC to 2020-04-18 17:53:41 +0000 UTC (now=2020-04-17 18:38:43.109605383 +0000 UTC))\nI0417 18:38:43.109951       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1587146998" (2020-04-17 18:10:14 +0000 UTC to 2022-04-17 18:10:15 +0000 UTC (now=2020-04-17 18:38:43.109938252 +0000 UTC))\nI0417 18:38:43.110255       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587148687" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587148686" (2020-04-17 17:38:06 +0000 UTC to 2021-04-17 17:38:06 +0000 UTC (now=2020-04-17 18:38:43.110241973 +0000 UTC))\n
Apr 17 18:52:47.145 E ns/openshift-monitoring pod/node-exporter-mkqhq node/ip-10-0-134-171.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -17T18:36:51Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-17T18:36:51Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 17 18:52:47.167 E ns/openshift-cluster-node-tuning-operator pod/tuned-fvnct node/ip-10-0-134-171.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 169] disabling system tuned...\nI0417 18:38:18.985343   86073 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0417 18:38:19.093208   86073 tuned.go:258] recommended tuned profile openshift-control-plane content changed\nI0417 18:38:19.964490   86073 tuned.go:417] getting recommended profile...\nI0417 18:38:20.066050   86073 tuned.go:444] active profile () != recommended profile (openshift-control-plane)\nI0417 18:38:20.066152   86073 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0417 18:38:20.066191   86073 tuned.go:310] starting tuned...\n2020-04-17 18:38:20,165 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-17 18:38:20,172 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-17 18:38:20,173 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-17 18:38:20,173 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-17 18:38:20,174 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-17 18:38:20,226 INFO     tuned.daemon.controller: starting controller\n2020-04-17 18:38:20,226 INFO     tuned.daemon.daemon: starting tuning\n2020-04-17 18:38:20,237 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-17 18:38:20,237 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-17 18:38:20,241 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:38:20,242 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-17 18:38:20,243 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-17 18:38:20,312 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:38:20,327 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 17 18:52:47.178 E ns/openshift-controller-manager pod/controller-manager-79mnr node/ip-10-0-134-171.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): I0417 18:38:32.071560       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0417 18:38:32.072801       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0417 18:38:32.072818       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0417 18:38:32.072894       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0417 18:38:32.072904       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 17 18:52:47.190 E ns/openshift-sdn pod/sdn-controller-c2rs6 node/ip-10-0-134-171.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0417 18:39:48.977839       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 17 18:52:47.214 E ns/openshift-multus pod/multus-admission-controller-ph4kx node/ip-10-0-134-171.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 17 18:52:47.242 E ns/openshift-multus pod/multus-jzkr8 node/ip-10-0-134-171.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 18:52:47.262 E ns/openshift-machine-config-operator pod/machine-config-daemon-l4prq node/ip-10-0-134-171.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:52:47.319 E ns/openshift-machine-config-operator pod/machine-config-server-md84b node/ip-10-0-134-171.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:49:59.814198       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:49:59.815358       1 api.go:51] Launching server on :22624\nI0417 18:49:59.815464       1 api.go:51] Launching server on :22623\n
Apr 17 18:52:52.107 E ns/openshift-monitoring pod/node-exporter-mkqhq node/ip-10-0-134-171.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:52:52.291 E ns/openshift-multus pod/multus-jzkr8 node/ip-10-0-134-171.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:52:52.339 E ns/openshift-etcd pod/etcd-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-17 18:30:56.181434 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-171.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-171.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-17 18:30:56.182232 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-17 18:30:56.182595 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-171.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-171.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-17 18:30:56.184464 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/17 18:30:56 grpc: addrConn.createTransport failed to connect to {https://10.0.134.171:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.134.171:9978: connect: connection refused". Reconnecting...\nWARNING: 2020/04/17 18:30:57 grpc: addrConn.createTransport failed to connect to {https://10.0.134.171:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.134.171:9978: connect: connection refused". Reconnecting...\n
Apr 17 18:52:52.348 E ns/openshift-machine-config-operator pod/machine-config-daemon-dckcj node/ip-10-0-151-146.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 18:52:52.401 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): on has been compacted\nE0417 18:50:27.516067       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516138       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516173       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516306       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516325       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516574       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.516651       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0417 18:50:27.516759       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0417 18:50:27.516977       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\nE0417 18:50:27.517307       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0417 18:50:27.517337       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0417 18:50:27.536393       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.0.141.119 10.0.154.74]\nI0417 18:50:27.548306       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Apr 17 18:52:52.401 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0417 18:36:52.685313       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 17 18:52:52.401 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:50:07.792830       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:07.793122       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:50:17.804835       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:50:17.805580       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 17 18:52:52.401 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-171.us-west-2.compute.internal node/ip-10-0-134-171.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0417 18:38:38.950124       1 cmd.go:200] Using insecure, self-signed certificates\nI0417 18:38:38.950346       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587148718 cert, and key in /tmp/serving-cert-412875827/serving-signer.crt, /tmp/serving-cert-412875827/serving-signer.key\nI0417 18:38:39.654068       1 observer_polling.go:155] Starting file observer\nI0417 18:38:42.031159       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0417 18:50:27.199901       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:50:27.200024       1 leaderelection.go:67] leaderelection lost\n
Apr 17 18:52:55.673 E ns/openshift-multus pod/multus-jzkr8 node/ip-10-0-134-171.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:52:59.175 E ns/openshift-multus pod/multus-jzkr8 node/ip-10-0-134-171.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:53:02.073 E ns/openshift-machine-config-operator pod/machine-config-daemon-l4prq node/ip-10-0-134-171.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 18:53:06.486 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 17 18:53:09.216 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/17 18:37:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 17 18:53:09.216 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/17 18:37:49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/17 18:37:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:37:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:37:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/17 18:37:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:37:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/17 18:37:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/17 18:37:49 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/17 18:37:49 http.go:107: HTTPS: listening on [::]:9091\nI0417 18:37:49.804108       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/17 18:51:47 oauthproxy.go:774: basicauth: 10.131.0.20:56372 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 17 18:53:09.216 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-17T18:37:43.061792373Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-17T18:37:43.061997603Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-17T18:37:43.093607965Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-17T18:37:48.064106308Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-17T18:37:53.065315479Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-17T18:37:58.284147564Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 17 18:53:09.242 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-744d585994-rdjl7 node/ip-10-0-128-242.us-west-2.compute.internal container=operator container exited with code 255 (Error):  operator.go:145] Starting syncing operator at 2020-04-17 18:41:43.067147473 +0000 UTC m=+302.962139158\nI0417 18:41:43.067366       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-17T18:15:39Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-17T18:41:43Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-17T18:41:43Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-17T18:16:16Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0417 18:41:43.072720       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"6568f27f-f415-4a00-93dd-5498b36fab3b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0417 18:41:43.086890       1 operator.go:147] Finished syncing operator at 19.735979ms\nI0417 18:50:29.548004       1 operator.go:145] Starting syncing operator at 2020-04-17 18:50:29.547987813 +0000 UTC m=+829.442979400\nI0417 18:50:29.608590       1 operator.go:147] Finished syncing operator at 60.589566ms\nI0417 18:53:07.523743       1 operator.go:145] Starting syncing operator at 2020-04-17 18:53:07.523726771 +0000 UTC m=+987.418718424\nI0417 18:53:07.630261       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:53:07.631001       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0417 18:53:07.631025       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0417 18:53:07.631041       1 logging_controller.go:93] Shutting down LogLevelController\nF0417 18:53:07.631110       1 builder.go:243] stopped\n
Apr 17 18:53:09.257 E ns/openshift-monitoring pod/prometheus-adapter-6f6fd4bd5f-tdqbv node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0417 18:36:58.082488       1 adapter.go:93] successfully using in-cluster auth\nI0417 18:36:59.126965       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 17 18:53:10.298 E ns/openshift-monitoring pod/kube-state-metrics-57bbb95665-4mkx2 node/ip-10-0-128-242.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 17 18:53:18.448 E ns/openshift-machine-config-operator pod/machine-config-operator-77868b6769-5cf82 node/ip-10-0-141-119.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0417 18:47:07.065440       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"44541540-4b03-42c3-b218-f9926d8d497c", ResourceVersion:"34374", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722743791, loc:(*time.Location)(0x27f8000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-77868b6769-5cf82_63bb0650-7600-4be5-9832-3fa600d0da88\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-04-17T18:47:07Z\",\"renewTime\":\"2020-04-17T18:47:07Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-77868b6769-5cf82_63bb0650-7600-4be5-9832-3fa600d0da88 became leader'\nI0417 18:47:07.065530       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0417 18:47:07.489679       1 operator.go:264] Starting MachineConfigOperator\nI0417 18:47:07.494091       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"8cfbbad2-f5b5-4e3a-aa6c-8789e36e69bf", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-04-17-172004}] to [{operator 0.0.1-2020-04-17-172224}]\n
Apr 17 18:53:20.598 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7cfd5ddfcc-t5q8m node/ip-10-0-141-119.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ror: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready"\nI0417 18:53:17.358576       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"7c244da1-9bfd-4879-807c-2670bcc6e984", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-134-171.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-134-171.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0417 18:53:18.974192       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:53:18.974622       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0417 18:53:18.974644       1 builder.go:209] server exited\n
Apr 17 18:53:24.655 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-146.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:53:22.582Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:53:22.586Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:53:22.596Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:53:22.597Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:53:22.597Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:53:22.597Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:53:22.598Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:53:22.598Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:53:31.039 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-6fc77cf96f-4v7tv node/ip-10-0-139-211.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 17 18:54:10.452 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 17 18:55:56.532 E ns/openshift-monitoring pod/node-exporter-v8mhm node/ip-10-0-128-242.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:53:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:53:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:53:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:53:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:53:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:54:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:54:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 17 18:55:56.556 E ns/openshift-cluster-node-tuning-operator pod/tuned-jsn9n node/ip-10-0-128-242.us-west-2.compute.internal container=tuned container exited with code 143 (Error): cond(s)\n2020-04-17 18:37:26,613 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-17 18:37:26,613 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-17 18:37:26,614 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-17 18:37:26,673 INFO     tuned.daemon.controller: starting controller\n2020-04-17 18:37:26,673 INFO     tuned.daemon.daemon: starting tuning\n2020-04-17 18:37:26,688 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-17 18:37:26,689 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-17 18:37:26,693 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:37:26,695 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-17 18:37:26,697 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-17 18:37:26,834 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:37:26,844 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0417 18:53:56.121169   52834 tuned.go:554] tuned "rendered" changed\nI0417 18:53:56.121227   52834 tuned.go:224] extracting tuned profiles\nI0417 18:53:56.121236   52834 tuned.go:417] getting recommended profile...\nI0417 18:53:56.156845   52834 tuned.go:513] profile "ip-10-0-128-242.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0417 18:53:56.238335   52834 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0417 18:53:56.333873   52834 tuned.go:417] getting recommended profile...\nI0417 18:53:56.454640   52834 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n2020-04-17 18:54:07,819 INFO     tuned.daemon.controller: terminating controller\n2020-04-17 18:54:07,820 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 17 18:55:56.588 E ns/openshift-multus pod/multus-kf7gt node/ip-10-0-128-242.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 18:55:56.620 E ns/openshift-machine-config-operator pod/machine-config-daemon-2mmcp node/ip-10-0-128-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:56:01.251 E ns/openshift-sdn pod/sdn-m7q2l node/ip-10-0-128-242.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:56:01.255 E ns/openshift-multus pod/multus-kf7gt node/ip-10-0-128-242.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:56:04.129 E ns/openshift-multus pod/multus-kf7gt node/ip-10-0-128-242.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:56:07.151 E ns/openshift-machine-config-operator pod/machine-config-daemon-2mmcp node/ip-10-0-128-242.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 18:56:24.656 E ns/openshift-marketplace pod/community-operators-7c59fd696b-48jz8 node/ip-10-0-139-211.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Apr 17 18:56:24.722 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-211.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:50:16.527Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:50:16.533Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:50:16.533Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:50:16.534Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:50:16.534Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:50:16.535Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:50:16.535Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:56:24.722 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-211.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-17T18:50:21.148055903Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-17T18:50:21.148212858Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-17T18:50:21.150441079Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-17T18:50:26.303224224Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 17 18:56:24.751 E ns/openshift-marketplace pod/certified-operators-78d8887497-8p8p6 node/ip-10-0-139-211.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Apr 17 18:56:25.769 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-139-211.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/04/17 18:50:10 Watching directory: "/etc/alertmanager/config"\n
Apr 17 18:56:25.769 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-139-211.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/17 18:50:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:50:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:50:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:50:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/17 18:50:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:50:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/17 18:50:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0417 18:50:10.605259       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/17 18:50:10 http.go:107: HTTPS: listening on [::]:9095\n
Apr 17 18:56:33.232 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-119.us-west-2.compute.internal" not ready since 2020-04-17 18:54:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 17 18:56:33.234 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-119.us-west-2.compute.internal" not ready since 2020-04-17 18:54:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 17 18:56:33.239 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-119.us-west-2.compute.internal" not ready since 2020-04-17 18:54:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 17 18:56:33.243 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-141-119.us-west-2.compute.internal" not ready since 2020-04-17 18:54:30 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-141-119.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 17 18:56:35.546 E ns/openshift-etcd pod/etcd-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-17 18:30:21.650563 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-141-119.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-141-119.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-17 18:30:21.651462 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-17 18:30:21.651971 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-119.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-119.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/17 18:30:21 grpc: addrConn.createTransport failed to connect to {https://10.0.141.119:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.119:9978: connect: connection refused". Reconnecting...\n2020-04-17 18:30:21.653857 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/17 18:30:22 grpc: addrConn.createTransport failed to connect to {https://10.0.141.119:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.119:9978: connect: connection refused". Reconnecting...\n
Apr 17 18:56:35.572 E ns/openshift-monitoring pod/node-exporter-kbkvx node/ip-10-0-141-119.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -17T18:37:24Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:24Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 17 18:56:35.584 E ns/openshift-controller-manager pod/controller-manager-4skc2 node/ip-10-0-141-119.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): I0417 18:38:28.840980       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0417 18:38:28.842350       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0417 18:38:28.842367       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-tzfqnxlq/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0417 18:38:28.842454       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0417 18:38:28.842457       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Apr 17 18:56:35.607 E ns/openshift-cluster-node-tuning-operator pod/tuned-z4gcp node/ip-10-0-141-119.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 89593 tuned.go:444] active profile () != recommended profile (openshift-control-plane)\nI0417 18:38:41.983307   89593 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0417 18:38:41.983339   89593 tuned.go:310] starting tuned...\n2020-04-17 18:38:42,096 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-17 18:38:42,105 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-17 18:38:42,106 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-17 18:38:42,106 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-17 18:38:42,107 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-17 18:38:42,147 INFO     tuned.daemon.controller: starting controller\n2020-04-17 18:38:42,147 INFO     tuned.daemon.daemon: starting tuning\n2020-04-17 18:38:42,158 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-17 18:38:42,158 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-17 18:38:42,161 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:38:42,164 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-17 18:38:42,165 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-17 18:38:42,234 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:38:42,244 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0417 18:50:29.422973   89593 tuned.go:513] profile "ip-10-0-141-119.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0417 18:50:29.876886   89593 tuned.go:417] getting recommended profile...\nI0417 18:50:30.003170   89593 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 17 18:56:35.617 E ns/openshift-sdn pod/sdn-controller-s6jt7 node/ip-10-0-141-119.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0417 18:39:43.283429       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 17 18:56:35.636 E ns/openshift-multus pod/multus-admission-controller-cczk5 node/ip-10-0-141-119.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 17 18:56:35.664 E ns/openshift-multus pod/multus-6zb59 node/ip-10-0-141-119.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 18:56:35.694 E ns/openshift-machine-config-operator pod/machine-config-daemon-49p46 node/ip-10-0-141-119.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:56:35.711 E ns/openshift-machine-config-operator pod/machine-config-server-cmvcw node/ip-10-0-141-119.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:49:53.673674       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:49:53.674540       1 api.go:51] Launching server on :22624\nI0417 18:49:53.674610       1 api.go:51] Launching server on :22623\n
Apr 17 18:56:39.669 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0417 18:34:26.035286       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 17 18:56:39.669 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:53:38.020423       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:38.020690       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:53:48.027562       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:48.027915       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 17 18:56:39.669 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0417 18:34:25.775414       1 cmd.go:200] Using insecure, self-signed certificates\nI0417 18:34:25.775688       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587148465 cert, and key in /tmp/serving-cert-114731838/serving-signer.crt, /tmp/serving-cert-114731838/serving-signer.key\nI0417 18:34:26.923420       1 observer_polling.go:155] Starting file observer\nI0417 18:34:26.943094       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0417 18:35:42.398403       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:55.507841       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0417 18:53:53.882899       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:53:53.883023       1 leaderelection.go:67] leaderelection lost\n
Apr 17 18:56:41.136 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0417 18:35:31.203853       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0417 18:35:31.210650       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0417 18:35:31.210730       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0417 18:35:31.213156       1 cert_rotation.go:137] Starting client certificate rotation controller\nE0417 18:35:42.137484       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:53.661575       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 17 18:56:41.136 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): _amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=26032&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:58.130514       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=25846&timeout=7m8s&timeoutSeconds=428&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:58.131564       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=24408&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:58.135606       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=24408&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:58.144945       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=25846&timeout=5m10s&timeoutSeconds=310&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0417 18:53:53.850815       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:53:53.851209       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0417 18:53:53.851275       1 csrcontroller.go:100] Shutting down CSR controller\nI0417 18:53:53.851293       1 csrcontroller.go:102] CSR controller shut down\nI0417 18:53:53.851268       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\n
Apr 17 18:56:41.136 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 7235       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:23.617479       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:32.004835       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:32.005082       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:33.623888       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:33.624257       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:42.013998       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:42.014259       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:43.630146       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:43.630443       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:52.022895       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:52.023201       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:53:53.636724       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:53:53.636990       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 17 18:56:41.136 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): ock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:38.644157       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:44.207860       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:50.224294       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:53.955380       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:54.771216       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0417 18:35:54.771262       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0417 18:36:02.552258       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Apr 17 18:56:42.274 E ns/openshift-multus pod/multus-6zb59 node/ip-10-0-141-119.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:56:42.313 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:34.520234       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:34.520255       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:36.526419       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:36.526447       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:38.532669       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:38.532691       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:40.541902       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:40.542005       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:42.553955       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:42.553984       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:44.561102       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:44.561127       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:46.571377       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:46.571399       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:48.575845       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:48.575871       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:50.581735       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:50.581782       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:53:52.587590       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:53:52.587613       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 17 18:56:42.313 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-119.us-west-2.compute.internal node/ip-10-0-141-119.us-west-2.compute.internal container=kube-scheduler container exited with code 2 (Error): were unschedulable.; waiting\nE0417 18:53:41.738500       1 scheduling_queue.go:376] Unable to find backoff value for pod openshift-apiserver/apiserver-57fc668846-4s7x4 in backoffQ\nI0417 18:53:41.739385       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57fc668846-4s7x4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0417 18:53:42.071979       1 scheduler.go:751] pod openshift-authentication/oauth-openshift-859f48fdcc-xbgwf is bound successfully on node "ip-10-0-154-74.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0417 18:53:42.210831       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-5786fc7d56-b4sv5 is bound successfully on node "ip-10-0-154-74.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0417 18:53:43.740308       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57fc668846-4s7x4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0417 18:53:46.740227       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57fc668846-4s7x4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0417 18:53:49.740418       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6745db9c84-6h5lg: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0417 18:53:51.740875       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-57fc668846-4s7x4: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Apr 17 18:56:42.376 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-242.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-17T18:56:40.031Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-17T18:56:40.038Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-17T18:56:40.039Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-17T18:56:40.040Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-17T18:56:40.040Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-17T18:56:40.040Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-17T18:56:40.041Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-17T18:56:40.041Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-17
Apr 17 18:56:45.473 E ns/openshift-multus pod/multus-6zb59 node/ip-10-0-141-119.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:56:46.480 E ns/openshift-machine-config-operator pod/machine-config-daemon-49p46 node/ip-10-0-141-119.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 18:57:03.488 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-5bd75bdmg node/ip-10-0-154-74.us-west-2.compute.internal container=operator container exited with code 255 (Error): o/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:05.405983       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:05.405998       1 reflector.go:185] Listing and watching *v1.Namespace from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:05.406010       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:05.406022       1 reflector.go:185] Listing and watching *v1.Deployment from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:05.406043       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0417 18:55:06.565976       1 httplog.go:90] GET /metrics: (6.816342ms) 200 [Prometheus/2.15.2 10.128.2.16:45378]\nI0417 18:55:21.453790       1 httplog.go:90] GET /metrics: (4.637488ms) 200 [Prometheus/2.15.2 10.131.0.36:46024]\nI0417 18:55:36.565205       1 httplog.go:90] GET /metrics: (5.797945ms) 200 [Prometheus/2.15.2 10.128.2.16:45378]\nI0417 18:55:51.455187       1 httplog.go:90] GET /metrics: (5.992568ms) 200 [Prometheus/2.15.2 10.131.0.36:46024]\nI0417 18:56:06.565212       1 httplog.go:90] GET /metrics: (5.602701ms) 200 [Prometheus/2.15.2 10.128.2.16:45378]\nI0417 18:56:21.454044       1 httplog.go:90] GET /metrics: (4.945495ms) 200 [Prometheus/2.15.2 10.131.0.36:46024]\nI0417 18:56:36.590940       1 httplog.go:90] GET /metrics: (28.069041ms) 200 [Prometheus/2.15.2 10.128.2.16:45378]\nI0417 18:56:51.457171       1 httplog.go:90] GET /metrics: (5.228242ms) 200 [Prometheus/2.15.2 10.129.2.18:49040]\nI0417 18:57:02.443745       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:57:02.443934       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0417 18:57:02.443983       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0417 18:57:02.444008       1 builder.go:243] stopped\n
Apr 17 18:57:07.331 E ns/openshift-monitoring pod/thanos-querier-5cfcb7d866-dprhz node/ip-10-0-154-74.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/17 18:50:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/17 18:50:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/17 18:50:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/17 18:50:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/17 18:50:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/17 18:50:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/17 18:50:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/17 18:50:02 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0417 18:50:02.503063       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/17 18:50:02 http.go:107: HTTPS: listening on [::]:9091\n
Apr 17 18:57:09.022 E ns/openshift-service-ca-operator pod/service-ca-operator-56c7b8855b-lqg7j node/ip-10-0-154-74.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Apr 17 18:57:10.915 E ns/openshift-service-ca pod/service-ca-6555bbfdd6-ltlmx node/ip-10-0-154-74.us-west-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Apr 17 18:57:11.005 E ns/openshift-cluster-machine-approver pod/machine-approver-6684476bc4-cjt56 node/ip-10-0-154-74.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 8:36:39.476161       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0417 18:36:39.476225       1 main.go:236] Starting Machine Approver\nI0417 18:36:39.576455       1 main.go:146] CSR csr-pdtxq added\nI0417 18:36:39.576479       1 main.go:149] CSR csr-pdtxq is already approved\nI0417 18:36:39.576495       1 main.go:146] CSR csr-2bwj6 added\nI0417 18:36:39.576501       1 main.go:149] CSR csr-2bwj6 is already approved\nI0417 18:36:39.576509       1 main.go:146] CSR csr-4ggbp added\nI0417 18:36:39.576516       1 main.go:149] CSR csr-4ggbp is already approved\nI0417 18:36:39.576525       1 main.go:146] CSR csr-8zdjw added\nI0417 18:36:39.576542       1 main.go:149] CSR csr-8zdjw is already approved\nI0417 18:36:39.576550       1 main.go:146] CSR csr-bvtdx added\nI0417 18:36:39.576555       1 main.go:149] CSR csr-bvtdx is already approved\nI0417 18:36:39.576566       1 main.go:146] CSR csr-c2sdl added\nI0417 18:36:39.576571       1 main.go:149] CSR csr-c2sdl is already approved\nI0417 18:36:39.576579       1 main.go:146] CSR csr-f4tx9 added\nI0417 18:36:39.576592       1 main.go:149] CSR csr-f4tx9 is already approved\nI0417 18:36:39.576606       1 main.go:146] CSR csr-6p4tp added\nI0417 18:36:39.576612       1 main.go:149] CSR csr-6p4tp is already approved\nI0417 18:36:39.576620       1 main.go:146] CSR csr-8kjkw added\nI0417 18:36:39.576626       1 main.go:149] CSR csr-8kjkw is already approved\nI0417 18:36:39.576634       1 main.go:146] CSR csr-r4h7b added\nI0417 18:36:39.576683       1 main.go:149] CSR csr-r4h7b is already approved\nI0417 18:36:39.576694       1 main.go:146] CSR csr-rkdkh added\nI0417 18:36:39.576700       1 main.go:149] CSR csr-rkdkh is already approved\nI0417 18:36:39.576710       1 main.go:146] CSR csr-rxfkq added\nI0417 18:36:39.576717       1 main.go:149] CSR csr-rxfkq is already approved\nW0417 18:53:54.999830       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 25209 (39983)\n
Apr 17 18:57:12.952 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-85f6cd55f4-nxcwp node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error):  container=\"kube-apiserver-insecure-readyz\" is terminated: \"Error\" - \"I0417 18:34:26.035286       1 readyz.go:103] Listening on 0.0.0.0:6080\\n\"" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-141-119.us-west-2.compute.internal\" not ready since 2020-04-17 18:56:35 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)"\nI0417 18:56:55.605729       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1f1fb15a-a733-4b3b-80d0-160e0f128254", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-141-119.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal container=\"kube-apiserver\" is not ready")\nI0417 18:57:01.303916       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"1f1fb15a-a733-4b3b-80d0-160e0f128254", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-141-119.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-119.us-west-2.compute.internal container=\"kube-apiserver\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0417 18:57:11.265130       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:57:11.265285       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0417 18:57:11.265505       1 builder.go:209] server exited\n
Apr 17 18:57:12.999 E ns/openshift-etcd-operator pod/etcd-operator-bff6566bd-66vpq node/ip-10-0-154-74.us-west-2.compute.internal container=operator container exited with code 255 (Error): 09.734540       1 client.go:361] parsed scheme: "passthrough"\nI0417 18:57:09.734571       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://10.0.154.74:2379 0  <nil>}] <nil>}\nI0417 18:57:09.734580       1 clientconn.go:577] ClientConn switching balancer to "pick_first"\nI0417 18:57:09.734620       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc002ff53a0, CONNECTING\nI0417 18:57:09.735068       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:09.757115       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc002ff53a0, READY\nI0417 18:57:09.758577       1 client.go:361] parsed scheme: "passthrough"\nI0417 18:57:09.758689       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://10.0.134.171:2379 0  <nil>}] <nil>}\nI0417 18:57:09.758701       1 clientconn.go:577] ClientConn switching balancer to "pick_first"\nI0417 18:57:09.758771       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0026f4500, CONNECTING\nI0417 18:57:09.758615       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:09.779887       1 balancer_conn_wrappers.go:127] pickfirstBalancer: HandleSubConnStateChange: 0xc0026f4500, READY\nI0417 18:57:09.789917       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:09.870152       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:57:09.871170       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0417 18:57:09.871263       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0417 18:57:09.871294       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nI0417 18:57:09.871341       1 base_controller.go:74] Shutting down LoggingSyncer ...\nF0417 18:57:09.871423       1 builder.go:243] stopped\n
Apr 17 18:57:36.303 E ns/openshift-console pod/console-78b97c9847-x7lrz node/ip-10-0-154-74.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-04-17T18:37:53Z cmd/main: cookies are secure!\n2020-04-17T18:37:53Z cmd/main: Binding to [::]:8443...\n2020-04-17T18:37:53Z cmd/main: using TLS\n2020-04-17T18:41:30Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020-04-17T18:41:35Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020-04-17T18:41:44Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-04-17T18:54:23Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Apr 17 18:57:47.040 E kube-apiserver failed contacting the API: Get https://api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43739&timeout=6m27s&timeoutSeconds=387&watch=true: dial tcp 52.12.236.241:6443: connect: connection refused
Apr 17 18:58:04.477 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusUserWorkloadFailed: Failed to rollout the stack. Error: running task Updating Prometheus-user-workload failed: deleting UserWorkload serving certs CA Bundle ConfigMap failed: Delete https://172.30.0.1:443/api/v1/namespaces/openshift-user-workload-monitoring/configmaps/serving-certs-ca-bundle: read tcp 10.129.0.14:49548->172.30.0.1:443: read: connection reset by peer
Apr 17 18:58:34.969 E ns/openshift-marketplace pod/redhat-marketplace-79c9c4cfcb-8c4bh node/ip-10-0-151-146.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Apr 17 18:59:02.538 E ns/openshift-cluster-node-tuning-operator pod/tuned-c2s99 node/ip-10-0-139-211.us-west-2.compute.internal container=tuned container exited with code 143 (Error): MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:37:46,576 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-17 18:37:46,578 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-17 18:37:46,779 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:37:46,788 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0417 18:53:56.117385   65150 tuned.go:554] tuned "rendered" changed\nI0417 18:53:56.117413   65150 tuned.go:224] extracting tuned profiles\nI0417 18:53:56.117421   65150 tuned.go:417] getting recommended profile...\nI0417 18:53:56.149330   65150 tuned.go:513] profile "ip-10-0-139-211.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0417 18:53:56.236172   65150 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0417 18:53:57.062336   65150 tuned.go:417] getting recommended profile...\nI0417 18:53:57.195070   65150 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0417 18:55:04.189798   65150 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0417 18:55:04.190000   65150 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0417 18:55:05.627504   65150 tuned.go:554] tuned "rendered" changed\nI0417 18:55:05.627536   65150 tuned.go:224] extracting tuned profiles\nI0417 18:55:05.627545   65150 tuned.go:417] getting recommended profile...\nI0417 18:55:05.650015   65150 tuned.go:513] profile "ip-10-0-139-211.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0417 18:55:05.762814   65150 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0417 18:55:06.062321   65150 tuned.go:417] getting recommended profile...\nI0417 18:55:06.176267   65150 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 17 18:59:02.560 E ns/openshift-monitoring pod/node-exporter-rmz2k node/ip-10-0-139-211.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:56:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:56:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:56:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:57:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:57:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:57:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-17T18:57:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 17 18:59:02.620 E ns/openshift-multus pod/multus-gt7n2 node/ip-10-0-139-211.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 18:59:02.633 E ns/openshift-machine-config-operator pod/machine-config-daemon-4vjf7 node/ip-10-0-139-211.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 18:59:06.688 E ns/openshift-multus pod/multus-gt7n2 node/ip-10-0-139-211.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 18:59:12.786 E ns/openshift-machine-config-operator pod/machine-config-daemon-4vjf7 node/ip-10-0-139-211.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 19:00:30.700 E ns/openshift-monitoring pod/node-exporter-nqdjs node/ip-10-0-154-74.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -17T18:37:03Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-17T18:37:03Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 17 19:00:30.709 E ns/openshift-controller-manager pod/controller-manager-wr624 node/ip-10-0-154-74.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): am: stream error: stream ID 679; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.984902       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 913; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.997628       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 809; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.997885       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 805; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.998038       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 681; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.998138       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 867; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.998226       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 807; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 17 19:00:30.721 E ns/openshift-cluster-node-tuning-operator pod/tuned-j76jt node/ip-10-0-154-74.us-west-2.compute.internal container=tuned container exited with code 143 (Error): oller: starting controller\n2020-04-17 18:38:47,645 INFO     tuned.daemon.daemon: starting tuning\n2020-04-17 18:38:47,654 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-17 18:38:47,654 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-17 18:38:47,657 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-17 18:38:47,658 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-17 18:38:47,660 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-17 18:38:47,744 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-17 18:38:47,752 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0417 18:50:29.426467   93013 tuned.go:513] profile "ip-10-0-154-74.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0417 18:50:30.398938   93013 tuned.go:417] getting recommended profile...\nI0417 18:50:30.537189   93013 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0417 18:57:34.572934   93013 tuned.go:513] profile "ip-10-0-154-74.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0417 18:57:34.597917   93013 tuned.go:513] profile "ip-10-0-154-74.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0417 18:57:35.398945   93013 tuned.go:417] getting recommended profile...\nI0417 18:57:35.561740   93013 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0417 18:57:46.512375   93013 tuned.go:114] received signal: terminated\nI0417 18:57:46.512476   93013 tuned.go:351] sending TERM to PID 93042\n2020-04-17 18:57:46,512 INFO     tuned.daemon.controller: terminating controller\n2020-04-17 18:57:46,512 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 17 19:00:30.741 E ns/openshift-sdn pod/sdn-controller-swb2m node/ip-10-0-154-74.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0417 18:39:55.607270       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0417 18:39:55.636263       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"153d000d-6060-4f64-849f-9e3becbcb4d9", ResourceVersion:"31163", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722743696, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-154-74\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-17T18:08:16Z\",\"renewTime\":\"2020-04-17T18:39:55Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-154-74 became leader'\nI0417 18:39:55.636334       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0417 18:39:55.641562       1 master.go:51] Initializing SDN master\nI0417 18:39:55.668706       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 17 19:00:30.764 E ns/openshift-multus pod/multus-pvdnq node/ip-10-0-154-74.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Apr 17 19:00:30.785 E ns/openshift-machine-config-operator pod/machine-config-daemon-ldw7m node/ip-10-0-154-74.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 17 19:00:30.797 E ns/openshift-machine-config-operator pod/machine-config-server-2n7mh node/ip-10-0-154-74.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0417 18:49:56.564827       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0417 18:49:56.566189       1 api.go:51] Launching server on :22624\nI0417 18:49:56.571684       1 api.go:51] Launching server on :22623\n
Apr 17 19:00:30.807 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:28.370830       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:28.370948       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:30.387277       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:30.387360       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:32.393568       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:32.393590       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:34.408801       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:34.408895       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:36.423856       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:36.423944       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:38.432415       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:38.432507       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:40.442835       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:40.442857       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:42.451259       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:42.451283       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:44.461827       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:44.461852       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0417 18:57:46.470670       1 certsync_controller.go:65] Syncing configmaps: []\nI0417 18:57:46.470821       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 17 19:00:30.807 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-scheduler container exited with code 2 (Error): 79] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-17 17:53:36 +0000 UTC to 2030-04-15 17:53:36 +0000 UTC (now=2020-04-17 18:50:29.490445518 +0000 UTC))\nI0417 18:50:29.490478       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1587146981" [] issuer="kubelet-signer" (2020-04-17 18:09:40 +0000 UTC to 2020-04-18 17:53:41 +0000 UTC (now=2020-04-17 18:50:29.490468615 +0000 UTC))\nI0417 18:50:29.490503       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-17 17:53:39 +0000 UTC to 2020-04-18 17:53:39 +0000 UTC (now=2020-04-17 18:50:29.490494703 +0000 UTC))\nI0417 18:50:29.490881       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1587146998" (2020-04-17 18:10:14 +0000 UTC to 2022-04-17 18:10:15 +0000 UTC (now=2020-04-17 18:50:29.490867789 +0000 UTC))\nI0417 18:50:29.491121       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587149429" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587149429" (2020-04-17 17:50:29 +0000 UTC to 2021-04-17 17:50:29 +0000 UTC (now=2020-04-17 18:50:29.491107757 +0000 UTC))\n
Apr 17 19:00:30.814 E ns/openshift-multus pod/multus-admission-controller-92gr6 node/ip-10-0-154-74.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 17 19:00:30.827 E ns/openshift-etcd pod/etcd-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-17 18:31:26.018950 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-154-74.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-154-74.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-17 18:31:26.019560 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-17 18:31:26.020025 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-154-74.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-154-74.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/17 18:31:26 grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.74:9978: connect: connection refused". Reconnecting...\n2020-04-17 18:31:26.022010 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/17 18:31:27 grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.74:9978: connect: connection refused". Reconnecting...\n
Apr 17 19:00:35.785 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): 57.304021       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0417 18:38:57.304044       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0417 18:38:57.303712       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "ServiceNetworkServing"\nI0417 18:38:57.306255       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "ServiceNetworkServing"\nI0417 18:38:57.306265       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "ServiceNetworkServing"\nI0417 18:38:57.303722       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "ExternalLoadBalancerServing"\nI0417 18:38:57.306294       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "ExternalLoadBalancerServing"\nI0417 18:38:57.306300       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "ExternalLoadBalancerServing"\nI0417 18:48:57.205445       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0417 18:48:57.226406       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com\nI0417 18:53:56.051959       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0417 18:57:46.518440       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:57:46.518762       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\n
Apr 17 19:00:35.785 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): vice "kubernetes" to [10.0.134.171 10.0.141.119]\nI0417 18:57:46.557919       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:46.558329       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:46.558744       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0417 18:57:46.559151       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.74:2379: connect: connection refused". Reconnecting...\nW0417 18:57:46.559200       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.74:2379: connect: connection refused". Reconnecting...\nI0417 18:57:46.559305       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0417 18:57:46.559390       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nI0417 18:57:46.561151       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:46.583258       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0417 18:57:46.568226       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0417 18:57:46.568700       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0417 18:57:46.568718       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Apr 17 19:00:35.785 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0417 18:32:05.726322       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 17 19:00:35.785 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:57:28.275579       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:28.275939       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:57:38.284328       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:38.284752       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 17 19:00:35.803 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): stream: stream error: stream ID 589; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:49:58.037720       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 695; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:53:13.229669       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 743; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.985884       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 917; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.986074       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 745; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.988539       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 879; INTERNAL_ERROR") has prevented the request from succeeding\nW0417 18:57:07.988715       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 915; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 17 19:00:35.803 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 5709       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:15.935957       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:22.363545       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:22.363833       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:25.943925       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:25.944218       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:32.369840       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:32.370070       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:35.958088       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:35.958392       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:42.376531       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:42.376999       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:45.967014       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:45.967562       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 17 19:00:35.803 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): Created pod: redhat-marketplace-848465f8f4-h6jfc\nI0417 18:57:40.800917       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-marketplace/redhat-marketplace", retrying. Error: endpoints "redhat-marketplace" already exists\nI0417 18:57:40.801182       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"redhat-marketplace", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service openshift-marketplace/redhat-marketplace: endpoints "redhat-marketplace" already exists\nI0417 18:57:41.547480       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-55dd6f5c48, need 1, creating 1\nI0417 18:57:41.547849       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-operators", UID:"2ad9c654-e957-4963-b24e-c9ce0611fa31", APIVersion:"apps/v1", ResourceVersion:"43679", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-55dd6f5c48 to 1\nI0417 18:57:41.558680       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps "redhat-operators": the object has been modified; please apply your changes to the latest version and try again\nI0417 18:57:41.581428       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-operators-55dd6f5c48", UID:"3fe159a7-c651-401f-b571-a04eb96bea30", APIVersion:"apps/v1", ResourceVersion:"43682", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-55dd6f5c48-9zmq2\nI0417 18:57:43.392611       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\n
Apr 17 19:00:35.803 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal node/ip-10-0-154-74.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0417 18:33:19.394559       1 cmd.go:200] Using insecure, self-signed certificates\nI0417 18:33:19.394782       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587148399 cert, and key in /tmp/serving-cert-597692645/serving-signer.crt, /tmp/serving-cert-597692645/serving-signer.key\nI0417 18:33:19.584563       1 observer_polling.go:155] Starting file observer\nW0417 18:33:19.586557       1 builder.go:174] unable to get owner reference (falling back to namespace): Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods: dial tcp [::1]:6443: connect: connection refused\nI0417 18:33:19.588602       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0417 18:33:19.590012       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:33:36.385281       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nI0417 18:57:46.532575       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:57:46.532615       1 leaderelection.go:67] leaderelection lost\n
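The "F0417 18:57:46.532615 ... leaderelection lost" fatal above is the usual client-go leader-election shutdown path: once the process receives SIGTERM and the lease is given up, OnStoppedLeading fires and the controller exits non-zero, which is consistent with the exit code 255 recorded for this container. A minimal sketch of that pattern, with hypothetical lock and namespace names rather than the real cert-recovery-controller wiring (the log shows it actually used the configmap lock openshift-kube-controller-manager/cert-recovery-controller-lock):

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	identity, _ := os.Hostname()

	// Hypothetical names; a Lease lock is used here for simplicity, while the
	// controller in the log above held a configmap-based lock.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"example-namespace",
		"example-controller-lock",
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: identity},
	)
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative values
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				<-ctx.Done() // run the controller loop until shutdown
			},
			OnStoppedLeading: func() {
				// Mirrors the "leaderelection lost" fatal seen in the log;
				// klog.Fatalf exits non-zero, so the container is recorded as
				// terminated with an error even on a clean SIGTERM.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}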
Apr 17 19:00:37.006 E ns/openshift-multus pod/multus-pvdnq node/ip-10-0-154-74.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 19:00:40.217 E ns/openshift-multus pod/multus-pvdnq node/ip-10-0-154-74.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
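The repeated "invariant violation: pod may not transition Running->Pending" entries above (and again at 19:00:44 below) come from the upgrade monitor's pod-state checks: pod phase is treated as forward-only, so once a pod has been observed Running, a later observation of the same pod in Pending is flagged. A minimal sketch of that kind of check, written as a hypothetical standalone watcher rather than the monitor's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Remember the last phase seen per pod UID and flag Running -> Pending.
	lastPhase := map[types.UID]corev1.PodPhase{}
	w, err := client.CoreV1().Pods(metav1.NamespaceAll).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		if lastPhase[pod.UID] == corev1.PodRunning && pod.Status.Phase == corev1.PodPending {
			fmt.Printf("invariant violation: pod %s/%s may not transition Running->Pending\n",
				pod.Namespace, pod.Name)
		}
		lastPhase[pod.UID] = pod.Status.Phase
	}
	// A real monitor would use an informer and handle watch expiry; a bare
	// Watch is enough to illustrate the invariant.
}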
Apr 17 19:00:41.256 E ns/openshift-machine-config-operator pod/machine-config-daemon-ldw7m node/ip-10-0-154-74.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 17 19:00:43.377 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-154-74.us-west-2.compute.internal" not ready since 2020-04-17 19:00:30 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="cluster-policy-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="cluster-policy-controller" is terminated: "Error" - "stream: stream error: stream ID 589; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:49:58.037720       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 695; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:53:13.229669       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 743; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:57:07.985884       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 917; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:57:07.986074       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 745; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:57:07.988539       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 879; INTERNAL_ERROR\") has prevented the request from succeeding\nW0417 18:57:07.988715       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server (\"unable to decode an event from the watch stream: stream error: stream ID 915; INTERNAL_ERROR\") has prevented the request from succeeding\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager" is terminated: "Error" - "Created pod: redhat-marketplace-848465f8f4-h6jfc\nI0417 18:57:40.800917       1 endpoints_controller.go:340] Error syncing endpoints for service \"openshift-marketplace/redhat-marketplace\", retrying. 
Error: endpoints \"redhat-marketplace\" already exists\nI0417 18:57:40.801182       1 event.go:281] Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"\", Name:\"redhat-marketplace\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service openshift-marketplace/redhat-marketplace: endpoints \"redhat-marketplace\" already exists\nI0417 18:57:41.547480       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-marketplace/redhat-operators-55dd6f5c48, need 1, creating 1\nI0417 18:57:41.547849       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"openshift-marketplace\", Name:\"redhat-operators\", UID:\"2ad9c654-e957-4963-b24e-c9ce0611fa31\", APIVersion:\"apps/v1\", ResourceVersion:\"43679\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-operators-55dd6f5c48 to 1\nI0417 18:57:41.558680       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-operators: Operation cannot be fulfilled on deployments.apps \"redhat-operators\": the object has been modified; please apply your changes to the latest version and try again\nI0417 18:57:41.581428       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"openshift-marketplace\", Name:\"redhat-operators-55dd6f5c48\", UID:\"3fe159a7-c651-401f-b571-a04eb96bea30\", APIVersion:\"apps/v1\", ResourceVersion:\"43682\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-operators-55dd6f5c48-9zmq2\nI0417 18:57:43.392611       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps \"telemeter-client\": the object has been modified; please apply your changes to the latest version and try again\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager-cert-syncer" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager-cert-syncer" is terminated: "Error" - "5709       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:15.935957       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:22.363545       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:22.363833       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:25.943925       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:25.944218       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:32.369840       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:32.370070       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:35.958088       1 certsync_controller.go:65] 
Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:35.958392       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:42.376531       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:42.376999       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0417 18:57:45.967014       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:45.967562       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager-recovery-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-154-74.us-west-2.compute.internal container="kube-controller-manager-recovery-controller" is terminated: "Error" - "W0417 18:33:19.394559       1 cmd.go:200] Using insecure, self-signed certificates\nI0417 18:33:19.394782       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587148399 cert, and key in /tmp/serving-cert-597692645/serving-signer.crt, /tmp/serving-cert-597692645/serving-signer.key\nI0417 18:33:19.584563       1 observer_polling.go:155] Starting file observer\nW0417 18:33:19.586557       1 builder.go:174] unable to get owner reference (falling back to namespace): Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/pods: dial tcp [::1]:6443: connect: connection refused\nI0417 18:33:19.588602       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0417 18:33:19.590012       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0417 18:33:36.385281       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps \"cert-recovery-controller-lock\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"\nI0417 18:57:46.532575       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0417 18:57:46.532615       1 leaderelection.go:67] leaderelection lost\n"
Apr 17 19:00:43.387 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-154-74.us-west-2.compute.internal" not ready since 2020-04-17 19:00:30 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver" is terminated: "Error" - "vice \"kubernetes\" to [10.0.134.171 10.0.141.119]\nI0417 18:57:46.557919       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nI0417 18:57:46.558329       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nI0417 18:57:46.558744       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nW0417 18:57:46.559151       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.154.74:2379: connect: connection refused\". Reconnecting...\nW0417 18:57:46.559200       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.154.74:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.154.74:2379: connect: connection refused\". Reconnecting...\nI0417 18:57:46.559305       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nW0417 18:57:46.559390       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nI0417 18:57:46.561151       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nI0417 18:57:46.583258       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0417 18:57:46.568226       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\nI0417 18:57:46.568700       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0417 18:57:46.568718       1 controlbuf.go:508] transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-cert-regeneration-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-cert-regeneration-controller" is terminated: "Error" - "57.304021       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - \"KubeAPIServerToKubeletClientCert\"\nI0417 18:38:57.304044       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - \"KubeAPIServerToKubeletClientCert\"\nI0417 18:38:57.303712       1 client_cert_rotation_controller.go:140] Starting CertRotationController - \"ServiceNetworkServing\"\nI0417 18:38:57.306255       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - \"ServiceNetworkServing\"\nI0417 18:38:57.306265       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - \"ServiceNetworkServing\"\nI0417 18:38:57.303722       1 client_cert_rotation_controller.go:140] Starting CertRotationController - \"ExternalLoadBalancerServing\"\nI0417 18:38:57.306294       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - \"ExternalLoadBalancerServing\"\nI0417 18:38:57.306300       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - \"ExternalLoadBalancerServing\"\nI0417 18:48:57.205445       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0417 18:48:57.226406       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-tzfqnxlq-1d6bd.origin-ci-int-aws.dev.rhcloud.com\nI0417 18:53:56.051959       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0417 18:57:46.518440       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0417 18:57:46.518762       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - \"ExternalLoadBalancerServing\"\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-cert-syncer" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-cert-syncer" is terminated: "Error" - "ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:57:28.275579       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle 
true}]\nI0417 18:57:28.275939       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0417 18:57:38.284328       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0417 18:57:38.284752       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n"\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-insecure-readyz" is not ready\nStaticPodsDegraded: nodes/ip-10-0-154-74.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-154-74.us-west-2.compute.internal container="kube-apiserver-insecure-readyz" is terminated: "Error" - "I0417 18:32:05.726322       1 readyz.go:103] Listening on 0.0.0.0:6080\n"
Apr 17 19:00:43.401 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-154-74.us-west-2.compute.internal" not ready since 2020-04-17 19:00:30 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
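The three clusteroperator Degraded=True events above all report the same underlying condition from the node controller: the master node ip-10-0-154-74 went NotReady with "Missing CNI default network" while it rebooted, and the static-pod containers on it were briefly reported as terminated. A sketch of reading that same Degraded condition with the OpenShift config client, outside CI (operator name and error handling are illustrative):

package main

import (
	"context"
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := configclient.NewForConfigOrDie(cfg)

	// Fetch the cluster-scoped clusteroperator object and print its Degraded
	// condition, which would show the NodeController_MasterNodesReady reason
	// seen in the event above.
	co, err := client.ConfigV1().ClusterOperators().Get(context.Background(), "kube-scheduler", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range co.Status.Conditions {
		if cond.Type == configv1.OperatorDegraded && cond.Status == configv1.ConditionTrue {
			fmt.Printf("kube-scheduler Degraded=True (%s): %s\n", cond.Reason, cond.Message)
		}
	}
}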
Apr 17 19:00:44.247 E ns/openshift-multus pod/multus-pvdnq node/ip-10-0-154-74.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Apr 17 19:03:10.237 E clusterversion/version changed Failing to True: context deadline exceeded