Result: SUCCESS
Tests: 4 failed / 21 succeeded
Started: 2020-04-20 16:47
Elapsed: 1h18m
Work namespace: ci-op-ty1sl3vp
Refs: release-4.4:d0260133, 127:c9dd11dc
Pod: 8d12c91d-8326-11ea-86dc-0a58ac10fc8e
Repo: openshift/cluster-node-tuning-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (35m48s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 2s of 30m55s (0%):

Apr 20 17:38:13.716 E ns/e2e-k8s-service-lb-available-9575 svc/service-test Service stopped responding to GET requests on reused connections
Apr 20 17:38:13.750 I ns/e2e-k8s-service-lb-available-9575 svc/service-test Service started responding to GET requests on reused connections
Apr 20 17:39:27.716 E ns/e2e-k8s-service-lb-available-9575 svc/service-test Service stopped responding to GET requests on reused connections
Apr 20 17:39:27.751 I ns/e2e-k8s-service-lb-available-9575 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1587405625.xml
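
For context, the availability check behind this test keeps issuing GET requests against the service's load-balancer endpoint over a reused connection and records every stop/start transition, which is what the two event pairs above show. A minimal sketch of that style of probe follows; the endpoint, poll interval, and output format are illustrative placeholders, not what the origin suite actually uses.

    // Sketch of a service-disruption probe in the spirit of this test: poll a
    // service endpoint over a reused connection and log every transition
    // between "responding" and "not responding". Endpoint and interval are
    // placeholders, not the origin suite's values.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	endpoint := "http://service-test.example.com/"          // placeholder service LB hostname
    	client := &http.Client{Timeout: 5 * time.Second}        // default Transport reuses idle connections

    	up := true
    	for {
    		resp, err := client.Get(endpoint)
    		ok := err == nil && resp.StatusCode == http.StatusOK
    		if resp != nil {
    			resp.Body.Close()
    		}
    		if ok != up {
    			if ok {
    				fmt.Printf("%s Service started responding to GET requests\n", time.Now().Format(time.RFC3339))
    			} else {
    				fmt.Printf("%s Service stopped responding to GET requests: %v\n", time.Now().Format(time.RFC3339), err)
    			}
    			up = ok
    		}
    		time.Sleep(time.Second)
    	}
    }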



Cluster upgrade Cluster frontend ingress remain available (34m48s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 6s of 34m47s (0%):

Apr 20 17:37:24.589 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Apr 20 17:37:25.588 - 4s    E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Apr 20 17:37:29.701 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Apr 20 17:38:37.588 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 20 17:38:37.641 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1587405625.xml
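
The route monitor reports failures separately for reused connections and for new connections, as the console and oauth-openshift events above show. A rough sketch of how a probe can exercise both paths is below; the route hostname is a placeholder and this is not the origin monitor itself, just the same idea: one client that keeps idle connections alive and one that dials afresh for every request.

    // Sketch of probing a route both over reused and over new connections, the
    // distinction the events above report on. Only the Transport settings
    // differ; the hostname is a placeholder.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func probe(name string, client *http.Client, url string) {
    	resp, err := client.Get(url)
    	if err != nil {
    		fmt.Printf("%s: GET failed: %v\n", name, err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
    }

    func main() {
    	url := "https://console-openshift-console.apps.example.com/" // placeholder route host

    	// Reused connections: the default Transport keeps idle connections alive.
    	reused := &http.Client{Timeout: 5 * time.Second}

    	// New connections: disable keep-alives so every request dials a fresh connection.
    	fresh := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{DisableKeepAlives: true},
    	}

    	for {
    		probe("reused connections", reused, url)
    		probe("new connections", fresh, url)
    		time.Sleep(time.Second)
    	}
    }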



Cluster upgrade OpenShift APIs remain available (34m48s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 34m47s (0%):

Apr 20 17:54:13.757 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 18.205.146.111:6443: connect: connection refused
Apr 20 17:54:14.721 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 20 17:54:14.773 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1587405625.xml
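
The API availability check works in the same spirit: a cheap GET against the apiserver (here for a deliberately missing imagestream, matching the URL in the first event above) with a short timeout, where a connection-level error such as "connection refused" counts as downtime and any HTTP response, even a 404, counts as reachable. A minimal sketch under those assumptions; the URL is a placeholder, and a real probe would authenticate with a bearer token and verify the cluster CA instead of skipping TLS verification.

    // Sketch of an apiserver availability poll: GET an object that is expected
    // to be missing and treat connection-level failures as downtime. URL and
    // TLS handling are placeholders/assumptions, not the origin implementation.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://api.example.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s"
    	client := &http.Client{
    		Timeout: 15 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // placeholder; use the cluster CA in practice
    		},
    	}

    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			// Dial/timeout errors mean the apiserver was unreachable.
    			fmt.Printf("%s OpenShift API stopped responding to GET requests: %v\n", time.Now().Format(time.RFC3339), err)
    		} else {
    			resp.Body.Close()
    			// Any HTTP answer (a 404 is expected here) means the apiserver is up.
    			fmt.Printf("%s OpenShift API responding (HTTP %d)\n", time.Now().Format(time.RFC3339), resp.StatusCode)
    		}
    		time.Sleep(time.Second)
    	}
    }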



openshift-tests Monitor cluster while tests execute (35m50s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
202 error level events were detected during this test run:

Apr 20 17:24:53.345 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:25:33.578 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-scheduler container exited with code 255 (Error): lowWatchBookmarks=true&resourceVersion=13949&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:25:33.186008       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=22773&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:25:33.188427       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=22826&timeoutSeconds=597&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:25:33.192462       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=20555&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:25:33.193590       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=13940&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:25:33.198697       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=22764&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:25:33.391760       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0420 17:25:33.391793       1 server.go:257] leaderelection lost\n
Apr 20 17:25:33.603 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-controller-manager container exited with code 255 (Error): ager\nI0420 17:25:32.802137       1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key\nI0420 17:25:32.802046       1 namespace_controller.go:212] Shutting down namespace controller\nI0420 17:25:32.802186       1 certificate_controller.go:130] Shutting down certificate controller "csrapproving"\nI0420 17:25:32.802256       1 serviceaccounts_controller.go:128] Shutting down service account controller\nI0420 17:25:32.801652       1 certificate_controller.go:130] Shutting down certificate controller "csrsigning"\nI0420 17:25:32.802340       1 replica_set.go:192] Shutting down replicaset controller\nI0420 17:25:32.802372       1 job_controller.go:155] Shutting down job controller\nI0420 17:25:32.802387       1 horizontal.go:167] Shutting down HPA controller\nI0420 17:25:32.802400       1 pv_controller_base.go:364] volume worker queue shutting down\nI0420 17:25:32.802401       1 stateful_set.go:157] Shutting down statefulset controller\nI0420 17:25:32.801896       1 pv_protection_controller.go:93] Shutting down PV protection controller\nI0420 17:25:32.802420       1 attach_detach_controller.go:378] Shutting down attach detach controller\nI0420 17:25:32.802433       1 clusterroleaggregation_controller.go:160] Shutting down ClusterRoleAggregator\nI0420 17:25:32.802449       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-140-162_752a5cc0-e971-4c86-aee6-f739c3a9aa14 stopped leading\nI0420 17:25:32.802486       1 horizontal.go:202] horizontal pod autoscaler controller worker shutting down\nF0420 17:25:32.802215       1 controllermanager.go:291] leaderelection lost\nE0420 17:25:32.847463       1 event.go:272] Unable to write event: 'Post https://localhost:6443/api/v1/namespaces/default/events: dial tcp [::1]:6443: connect: connection refused' (may retry after sleeping)\n
Apr 20 17:28:36.484 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-68b457459f" has successfully progressed.
Apr 20 17:29:23.127 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-777595bfd9-gjh5z node/ip-10-0-140-162.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): yment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"25f14b32-be0e-4a32-9298-973b2ea72bdb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-140-162.ec2.internal pods/openshift-kube-scheduler-ip-10-0-140-162.ec2.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0420 17:29:22.523586       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:29:22.524412       1 base_controller.go:74] Shutting down RevisionController ...\nI0420 17:29:22.524444       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0420 17:29:22.524459       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0420 17:29:22.524475       1 base_controller.go:74] Shutting down  ...\nI0420 17:29:22.524489       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0420 17:29:22.524506       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0420 17:29:22.524520       1 base_controller.go:74] Shutting down NodeController ...\nI0420 17:29:22.524537       1 base_controller.go:74] Shutting down PruneController ...\nI0420 17:29:22.524551       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0420 17:29:22.524565       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0420 17:29:22.524579       1 base_controller.go:74] Shutting down InstallerController ...\nI0420 17:29:22.524594       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0420 17:29:22.524607       1 target_config_reconciler.go:126] Shutting down TargetConfigReconciler\nI0420 17:29:22.524620       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0420 17:29:22.524834       1 builder.go:243] stopped\n
Apr 20 17:29:47.223 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6595b8c88c-jcwjm node/ip-10-0-140-162.ec2.internal container=openshift-apiserver-operator container exited with code 255 (Error): /v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable"\nI0420 17:16:02.621175       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"a5a1d2d2-bc53-48e7-ac9e-e6456695325e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nI0420 17:16:02.625617       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"a5a1d2d2-bc53-48e7-ac9e-e6456695325e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to ""\nI0420 17:29:46.551873       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:29:46.551979       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0420 17:29:46.552052       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0420 17:29:46.552078       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0420 17:29:46.552383       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0420 17:29:46.551980       1 builder.go:210] server exited\nI0420 17:29:46.565350       1 secure_serving.go:222] Stopped listening on [::]:8443\n
Apr 20 17:29:52.585 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:29:51.367210       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:29:51.373323       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:29:51.377944       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0420 17:29:51.378707       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0420 17:29:51.379193       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 20 17:30:01.297 E ns/openshift-machine-api pod/machine-api-operator-65bf7cdc84-d2zcz node/ip-10-0-140-162.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Apr 20 17:30:13.798 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:30:13.831 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:30:13.363461       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:30:13.366862       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:30:13.368904       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:30:13.368985       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0420 17:30:13.369609       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 20 17:30:28.997 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:30:43.085 E kube-apiserver Kube API started failing: Get https://api.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 20 17:30:50.159 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:31:13.580 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:31:13.005219       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:31:13.008112       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:31:13.014687       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:31:13.014727       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0420 17:31:13.015397       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 20 17:31:30.741 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:31:29.850432       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:31:29.851593       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:31:29.854910       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:31:29.854968       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0420 17:31:29.866717       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 20 17:32:05.915 E ns/openshift-machine-api pod/machine-api-controllers-5497b5b9db-d4gf6 node/ip-10-0-140-162.ec2.internal container=controller-manager container exited with code 1 (Error): 
Apr 20 17:32:08.923 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6cfb58d6c4-pn9mx node/ip-10-0-140-162.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-04-20T17:09:19Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-04-20T17:09:19Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-04-20T17:09:19Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-20T17:09:18Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-20-164731"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0420 17:17:34.302789       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b3da8b4f-4076-4ce2-9802-08c206996b45", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0420 17:32:08.355014       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0420 17:32:08.355080       1 leaderelection.go:66] leaderelection lost\n
Apr 20 17:32:20.005 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:32:40.096 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:32:49.144 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:32:48.343573       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:32:48.345732       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:32:48.349824       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:32:48.349995       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0420 17:32:48.356669       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 20 17:33:00.215 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0420 17:32:59.737938       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:32:59.739823       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:32:59.741562       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:32:59.741921       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0420 17:32:59.743305       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 20 17:33:12.267 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:33:36.455 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-scheduler container exited with code 255 (Error): eplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=26725&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:35.990452       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=25142&timeout=5m21s&timeoutSeconds=321&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:36.025168       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=25645&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:36.026138       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=25607&timeout=8m10s&timeoutSeconds=490&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:36.027749       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=25645&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:36.029640       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=27199&timeoutSeconds=348&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:33:36.079028       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0420 17:33:36.079062       1 server.go:257] leaderelection lost\n
Apr 20 17:33:38.413 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-controller-manager container exited with code 255 (Error): 7.130975       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=25144&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:37.132187       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubestorageversionmigrators?allowWatchBookmarks=true&resourceVersion=23044&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:37.133383       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/featuregates?allowWatchBookmarks=true&resourceVersion=23045&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:37.134592       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=27112&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:33:37.135511       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/snapshot.storage.k8s.io/v1beta1/volumesnapshotcontents?allowWatchBookmarks=true&resourceVersion=23039&timeout=9m30s&timeoutSeconds=570&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:33:37.651789       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0420 17:33:37.651887       1 controllermanager.go:291] leaderelection lost\n
Apr 20 17:33:47.466 E ns/openshift-cluster-machine-approver pod/machine-approver-5f78dfb66-jzpkx node/ip-10-0-140-162.ec2.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:33.154771       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:34.156183       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:35.157915       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:36.158598       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:37.159704       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0420 17:25:38.161208       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 20 17:34:14.856 E ns/openshift-monitoring pod/kube-state-metrics-796b7bff76-d5dfc node/ip-10-0-146-23.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 20 17:34:16.821 E ns/openshift-controller-manager pod/controller-manager-pjvk9 node/ip-10-0-140-162.ec2.internal container=controller-manager container exited with code 137 (Error): I0420 17:15:24.070531       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0420 17:15:24.072981       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable-initial@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0420 17:15:24.073011       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable-initial@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0420 17:15:24.073110       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0420 17:15:24.073170       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 20 17:34:25.978 E ns/openshift-monitoring pod/openshift-state-metrics-86b7977f46-ld57z node/ip-10-0-146-23.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Apr 20 17:34:28.438 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-5dd87f4d45" is progressing.\n* deployment openshift-operator-lifecycle-manager/catalog-operator is progressing ReplicaSetUpdated: ReplicaSet "catalog-operator-f7f845c4c" is progressing.
Apr 20 17:34:32.929 E ns/openshift-monitoring pod/thanos-querier-55dc9d7558-5b2gz node/ip-10-0-140-238.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/20 17:20:30 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:20:30 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/20 17:20:30 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/20 17:20:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/20 17:20:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/20 17:20:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:20:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/20 17:20:30 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/20 17:20:30 http.go:107: HTTPS: listening on [::]:9091\nI0420 17:20:30.369252       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 20 17:34:33.948 E ns/openshift-monitoring pod/grafana-8d759f454-t9hdd node/ip-10-0-140-238.ec2.internal container=grafana container exited with code 1 (Error): 
Apr 20 17:34:33.948 E ns/openshift-monitoring pod/grafana-8d759f454-t9hdd node/ip-10-0-140-238.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Apr 20 17:34:34.747 E ns/openshift-monitoring pod/node-exporter-2hlxb node/ip-10-0-142-75.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:33:42Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:33:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:33:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:12Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:34:53.116 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:34:56.234 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-23.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-20T17:34:50.583Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-20T17:34:50.592Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-20T17:34:50.592Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-20T17:34:50.593Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-20T17:34:50.593Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-20T17:34:50.593Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-20T17:34:50.594Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-20T17:34:50.594Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-20T17:34:50.594Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-20T17:34:50.594Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-20
Apr 20 17:34:59.964 E ns/openshift-monitoring pod/node-exporter-j57mm node/ip-10-0-154-231.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:33:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:35:08.212 E ns/openshift-monitoring pod/node-exporter-kg5zz node/ip-10-0-140-162.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:33:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:13Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:34:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:35:11.166 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-238.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-20T17:35:07.569Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-20T17:35:07.572Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-20T17:35:07.573Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-20T17:35:07.575Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-20T17:35:07.576Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-20T17:35:07.576Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-20T17:35:07.577Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-20T17:35:07.577Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-20
Apr 20 17:35:11.238 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:35:12.278 E ns/openshift-marketplace pod/certified-operators-759f558dd7-nzqgp node/ip-10-0-146-23.ec2.internal container=certified-operators container exited with code 2 (Error): 
Apr 20 17:35:31.113 E ns/openshift-console-operator pod/console-operator-7998886c7f-4t5nw node/ip-10-0-154-231.ec2.internal container=console-operator container exited with code 255 (Error): -164731\nE0420 17:21:42.828434       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-04-20-164731\nE0420 17:21:44.115929       1 status.go:73] SyncLoopRefreshProgressing InProgress Working toward version 0.0.1-2020-04-20-164731\nE0420 17:21:44.115960       1 status.go:73] DeploymentAvailable FailedUpdate 2 replicas ready at version 0.0.1-2020-04-20-164731\nI0420 17:22:14.974991       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-20T17:15:14Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-20T17:22:14Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-20T17:22:14Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-20T17:15:14Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0420 17:22:14.990817       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"6b05f5d5-d95e-48b7-8de1-80ff35b52f75", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0420 17:30:15.553282       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 101; INTERNAL_ERROR") has prevented the request from succeeding\nI0420 17:35:30.273970       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:35:30.274526       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0420 17:35:30.274550       1 builder.go:210] server exited\nI0420 17:35:30.278450       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\n
Apr 20 17:35:33.402 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:36:09.545 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-scheduler container exited with code 255 (Error): tps://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23026&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:08.194598       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=23028&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:08.195614       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=31359&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:08.196782       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=23026&timeout=6m38s&timeoutSeconds=398&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:08.200386       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30685&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:08.204418       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=30859&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:36:08.774364       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0420 17:36:08.774400       1 server.go:257] leaderelection lost\n
Apr 20 17:36:10.566 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-controller-manager container exited with code 255 (Error): ue&resourceVersion=26180&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:10.370473       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=31363&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:10.371735       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=27356&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:10.372782       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23026&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:10.374083       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=23026&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:10.375223       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/images?allowWatchBookmarks=true&resourceVersion=23524&timeout=7m19s&timeoutSeconds=439&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:36:10.375370       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0420 17:36:10.375465       1 controllermanager.go:291] leaderelection lost\n
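Editor's note: the kube-scheduler and kube-controller-manager exits with code 255 above follow the usual client-go leader-election pattern: while the local API server is rolling, the lease cannot be renewed, OnStoppedLeading fires, and the process terminates so the restarted container can re-acquire the lock. A minimal sketch of that pattern with k8s.io/client-go; the lock namespace, name, and timings below are placeholders, not the components' real configuration:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		// Placeholder lease object; the real components use their own lock names.
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-controller"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				<-ctx.Done() // run controller loops until the context is cancelled
			},
			OnStoppedLeading: func() {
				// Mirrors the fatal "leaderelection lost" seen in the events above:
				// exiting lets a restarted container contend for the lock again.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}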
Apr 20 17:36:23.246 E ns/openshift-console pod/console-6bc9946559-jqrck node/ip-10-0-142-75.ec2.internal container=console container exited with code 2 (Error): 2020-04-20T17:21:18Z cmd/main: cookies are secure!\n2020-04-20T17:21:18Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-20T17:21:28Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-20T17:21:38Z cmd/main: Binding to [::]:8443...\n2020-04-20T17:21:38Z cmd/main: using TLS\n
Apr 20 17:37:02.447 E ns/openshift-sdn pod/sdn-controller-nbtw2 node/ip-10-0-154-231.ec2.internal container=sdn-controller container exited with code 2 (Error): 23")\nI0420 17:16:18.010631       1 subnets.go:149] Created HostSubnet ip-10-0-146-23.ec2.internal (host: "ip-10-0-146-23.ec2.internal", ip: "10.0.146.23", subnet: "10.128.2.0/23")\nI0420 17:16:19.223943       1 subnets.go:149] Created HostSubnet ip-10-0-140-238.ec2.internal (host: "ip-10-0-140-238.ec2.internal", ip: "10.0.140.238", subnet: "10.129.2.0/23")\nI0420 17:24:36.624613       1 vnids.go:115] Allocated netid 5006051 for namespace "e2e-k8s-service-lb-available-9575"\nI0420 17:24:36.638292       1 vnids.go:115] Allocated netid 10704361 for namespace "e2e-k8s-sig-apps-job-upgrade-1345"\nI0420 17:24:36.657401       1 vnids.go:115] Allocated netid 5349709 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-947"\nI0420 17:24:36.685181       1 vnids.go:115] Allocated netid 12127626 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-7852"\nI0420 17:24:36.724284       1 vnids.go:115] Allocated netid 12706056 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-2631"\nI0420 17:24:36.762310       1 vnids.go:115] Allocated netid 10701704 for namespace "e2e-kubernetes-api-available-8166"\nI0420 17:24:36.777743       1 vnids.go:115] Allocated netid 12015875 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-5951"\nI0420 17:24:36.866618       1 vnids.go:115] Allocated netid 935190 for namespace "e2e-frontend-ingress-available-6657"\nI0420 17:24:36.891984       1 vnids.go:115] Allocated netid 10079268 for namespace "e2e-k8s-sig-apps-deployment-upgrade-6356"\nI0420 17:24:36.907730       1 vnids.go:115] Allocated netid 8650940 for namespace "e2e-openshift-api-available-6948"\nE0420 17:25:22.123755       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=22174&timeout=7m44s&timeoutSeconds=464&watch=true: dial tcp 10.0.141.15:6443: connect: connection refused\n
Apr 20 17:37:12.510 E ns/openshift-sdn pod/sdn-bbwl2 node/ip-10-0-154-231.ec2.internal container=sdn container exited with code 255 (Error):  17:36:31.115097    2072 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.140.162:6443 10.0.142.75:6443 10.0.154.231:6443]\nI0420 17:36:31.259527    2072 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:36:31.259548    2072 proxier.go:347] userspace syncProxyRules took 28.015178ms\nI0420 17:36:35.420247    2072 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.140.162:6443 10.0.142.75:6443 10.0.154.231:6443]\nI0420 17:36:35.572697    2072 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:36:35.572725    2072 proxier.go:347] userspace syncProxyRules took 31.995437ms\nI0420 17:36:59.828332    2072 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.3:6443 10.130.0.5:6443]\nI0420 17:36:59.828458    2072 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0420 17:36:59.828515    2072 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.3:8443 10.130.0.5:8443]\nI0420 17:36:59.828556    2072 roundrobin.go:217] Delete endpoint 10.128.0.17:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0420 17:37:00.011026    2072 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:37:00.011053    2072 proxier.go:347] userspace syncProxyRules took 31.258286ms\nI0420 17:37:01.452630    2072 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.140.162:10259 10.0.142.75:10259 10.0.154.231:10259]\nI0420 17:37:01.620074    2072 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:37:01.620109    2072 proxier.go:347] userspace syncProxyRules took 38.16772ms\nF0420 17:37:12.124409    2072 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 20 17:37:18.465 E ns/openshift-sdn pod/sdn-controller-85vxq node/ip-10-0-142-75.ec2.internal container=sdn-controller container exited with code 2 (Error): I0420 17:08:04.119906       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0420 17:14:16.463569       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\nE0420 17:33:34.147039       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.141.15:6443: connect: connection refused\n
Apr 20 17:37:21.085 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 20 17:37:30.529 E ns/openshift-multus pod/multus-wt7w9 node/ip-10-0-142-75.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 20 17:37:30.949 E ns/openshift-multus pod/multus-admission-controller-bptdr node/ip-10-0-140-162.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 20 17:38:02.816 E ns/openshift-sdn pod/sdn-n9j7f node/ip-10-0-146-23.ec2.internal container=sdn container exited with code 255 (Error): default:https" at 172.30.78.116:443/TCP\nI0420 17:37:59.568612   73738 service.go:363] Adding new service port "openshift-marketplace/marketplace-operator-metrics:metrics" at 172.30.87.213:8383/TCP\nI0420 17:37:59.568629   73738 service.go:363] Adding new service port "openshift-marketplace/marketplace-operator-metrics:https-metrics" at 172.30.87.213:8081/TCP\nI0420 17:37:59.568651   73738 service.go:363] Adding new service port "default/kubernetes:https" at 172.30.0.1:443/TCP\nI0420 17:37:59.568986   73738 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0420 17:37:59.662153   73738 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:37:59.662185   73738 proxier.go:347] userspace syncProxyRules took 94.901239ms\nI0420 17:37:59.673112   73738 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:37:59.673141   73738 proxier.go:347] userspace syncProxyRules took 105.45732ms\nI0420 17:37:59.726750   73738 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31210/tcp)\nI0420 17:37:59.727456   73738 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31393/tcp)\nI0420 17:37:59.727694   73738 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-9575/service-test:" (:32415/tcp)\nI0420 17:37:59.762557   73738 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31012\nI0420 17:37:59.881164   73738 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0420 17:37:59.881200   73738 cmd.go:173] openshift-sdn network plugin registering startup\nI0420 17:37:59.881337   73738 cmd.go:177] openshift-sdn network plugin ready\nI0420 17:38:02.677570   73738 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0420 17:38:02.677618   73738 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 20 17:38:10.705 E ns/openshift-multus pod/multus-admission-controller-scrfk node/ip-10-0-142-75.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 20 17:38:25.140 E ns/openshift-multus pod/multus-twgnj node/ip-10-0-140-162.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 20 17:38:28.159 E ns/openshift-sdn pod/sdn-x2q6v node/ip-10-0-140-162.ec2.internal container=sdn container exited with code 255 (Error):    93378 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:37:40.309692   93378 proxier.go:347] userspace syncProxyRules took 29.41784ms\nI0420 17:38:10.469374   93378 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:10.469451   93378 proxier.go:347] userspace syncProxyRules took 29.103505ms\nI0420 17:38:26.774932   93378 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.86:6443 10.129.0.3:6443 10.130.0.68:6443]\nI0420 17:38:26.775061   93378 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.86:8443 10.129.0.3:8443 10.130.0.68:8443]\nI0420 17:38:26.789355   93378 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.86:6443 10.130.0.68:6443]\nI0420 17:38:26.789476   93378 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0420 17:38:26.789536   93378 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.86:8443 10.130.0.68:8443]\nI0420 17:38:26.789583   93378 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0420 17:38:27.041240   93378 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:27.041394   93378 proxier.go:347] userspace syncProxyRules took 53.945813ms\nI0420 17:38:27.322835   93378 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:27.322867   93378 proxier.go:347] userspace syncProxyRules took 69.602464ms\nI0420 17:38:27.408845   93378 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0420 17:38:27.408882   93378 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 20 17:38:52.575 E ns/openshift-sdn pod/sdn-gf4n2 node/ip-10-0-136-123.ec2.internal container=sdn container exited with code 255 (Error): 7:38:15.174804   44330 cmd.go:173] openshift-sdn network plugin registering startup\nI0420 17:38:15.174933   44330 cmd.go:177] openshift-sdn network plugin ready\nI0420 17:38:26.773820   44330 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.86:6443 10.129.0.3:6443 10.130.0.68:6443]\nI0420 17:38:26.773861   44330 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.86:8443 10.129.0.3:8443 10.130.0.68:8443]\nI0420 17:38:26.786987   44330 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.86:6443 10.130.0.68:6443]\nI0420 17:38:26.787022   44330 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0420 17:38:26.787042   44330 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.86:8443 10.130.0.68:8443]\nI0420 17:38:26.787055   44330 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0420 17:38:26.907728   44330 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:26.907754   44330 proxier.go:347] userspace syncProxyRules took 28.740977ms\nI0420 17:38:27.045760   44330 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:27.045785   44330 proxier.go:347] userspace syncProxyRules took 27.698906ms\nI0420 17:38:44.699977   44330 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0420 17:38:51.831036   44330 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0420 17:38:51.831080   44330 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 20 17:38:57.945 E ns/openshift-multus pod/multus-admission-controller-8r5l7 node/ip-10-0-154-231.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 20 17:39:17.803 E ns/openshift-sdn pod/sdn-rcv4q node/ip-10-0-140-238.ec2.internal container=sdn container exited with code 255 (Error): s -> 172.30.0.10\nI0420 17:38:28.269134   94347 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:28.269163   94347 proxier.go:347] userspace syncProxyRules took 95.964801ms\nI0420 17:38:28.318668   94347 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31393/tcp)\nI0420 17:38:28.318912   94347 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-9575/service-test:" (:32415/tcp)\nI0420 17:38:28.319125   94347 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31210/tcp)\nI0420 17:38:28.367709   94347 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31012\nI0420 17:38:28.375088   94347 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0420 17:38:28.375140   94347 cmd.go:173] openshift-sdn network plugin registering startup\nI0420 17:38:28.375252   94347 cmd.go:177] openshift-sdn network plugin ready\nI0420 17:38:58.208667   94347 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:38:58.208691   94347 proxier.go:347] userspace syncProxyRules took 36.006729ms\nI0420 17:39:07.986545   94347 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.86:6443 10.129.0.72:6443 10.130.0.68:6443]\nI0420 17:39:07.986589   94347 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.86:8443 10.129.0.72:8443 10.130.0.68:8443]\nI0420 17:39:08.114185   94347 proxier.go:368] userspace proxy: processing 0 service events\nI0420 17:39:08.114208   94347 proxier.go:347] userspace syncProxyRules took 26.830614ms\nI0420 17:39:17.485245   94347 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0420 17:39:17.485284   94347 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
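Editor's note: the sdn pods above all exit with code 255 for the same reason reported in their last lines: a periodic healthcheck notices that the node's OVS database socket has gone away or come back as a new server while the openvswitch daemonset rolls, and the plugin restarts itself so it can reprogram flows on the fresh br0. A rough stand-in for that kind of unix-socket probe, with illustrative paths and messages rather than the openshift-sdn sources:

package main

import (
	"log"
	"net"
	"time"
)

// probeOVS checks that the OVS database socket still accepts connections.
// The path matches the one in the events above; the probe itself is a
// simplified illustration of the SDN healthcheck's behavior.
func probeOVS(socketPath string) error {
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const sock = "/var/run/openvswitch/db.sock"
	for {
		if err := probeOVS(sock); err != nil {
			// Exiting (rather than retrying forever) is deliberate: the pod
			// restarts and redoes flow setup against the new OVS instance.
			log.Fatalf("OVS server unreachable, restarting SDN: %v", err)
		}
		time.Sleep(10 * time.Second)
	}
}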
Apr 20 17:39:31.836 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-68d7797f49-bcd8p node/ip-10-0-140-238.ec2.internal container=snapshot-controller container exited with code 255 (Error): 
Apr 20 17:39:50.145 E ns/openshift-multus pod/multus-hvtbp node/ip-10-0-154-231.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 20 17:40:31.175 E ns/openshift-multus pod/multus-s28rg node/ip-10-0-146-23.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 20 17:41:12.080 E ns/openshift-multus pod/multus-s9mjk node/ip-10-0-140-238.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 20 17:42:09.997 E ns/openshift-machine-config-operator pod/machine-config-operator-85d756f5b7-lsc4n node/ip-10-0-140-162.ec2.internal container=machine-config-operator container exited with code 2 (Error): "", Name:"machine-config", UID:"31fa144d-f14f-46ca-a0d0-15080967dfc9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-04-20-164731}]\nE0420 17:09:14.025017       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0420 17:09:14.161478       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0420 17:09:15.179874       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0420 17:09:19.151399       1 sync.go:61] [init mode] synced RenderConfig in 5.479995388s\nI0420 17:09:19.552747       1 sync.go:61] [init mode] synced MachineConfigPools in 400.682075ms\nI0420 17:10:08.950682       1 sync.go:61] [init mode] synced MachineConfigDaemon in 49.39780779s\nI0420 17:10:18.005098       1 sync.go:61] [init mode] synced MachineConfigController in 9.054357926s\nI0420 17:10:21.313433       1 sync.go:61] [init mode] synced MachineConfigServer in 3.308286344s\nI0420 17:12:12.321544       1 sync.go:61] [init mode] synced RequiredPools in 1m51.00806453s\nI0420 17:12:12.358030       1 sync.go:85] Initialization complete\nE0420 17:14:16.525951       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 20 17:44:05.415 E ns/openshift-machine-config-operator pod/machine-config-daemon-gtdd4 node/ip-10-0-140-162.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:10.502 E ns/openshift-machine-config-operator pod/machine-config-daemon-4dckk node/ip-10-0-140-238.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:20.117 E ns/openshift-machine-config-operator pod/machine-config-daemon-hznft node/ip-10-0-154-231.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:24.678 E ns/openshift-machine-config-operator pod/machine-config-daemon-zb4l8 node/ip-10-0-146-23.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:28.346 E ns/openshift-machine-config-operator pod/machine-config-daemon-zm5ts node/ip-10-0-136-123.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:32.056 E ns/openshift-machine-config-operator pod/machine-config-daemon-2hvrs node/ip-10-0-142-75.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:44:42.545 E ns/openshift-machine-config-operator pod/machine-config-controller-55477d4848-8gl4j node/ip-10-0-140-162.ec2.internal container=machine-config-controller container exited with code 2 (Error):  node_controller.go:452] Pool worker: node ip-10-0-136-123.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0420 17:17:39.300573       1 node_controller.go:452] Pool worker: node ip-10-0-146-23.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-afa7b49c42af860702a5a49458fbdcf9\nI0420 17:17:39.300800       1 node_controller.go:452] Pool worker: node ip-10-0-146-23.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-afa7b49c42af860702a5a49458fbdcf9\nI0420 17:17:39.300872       1 node_controller.go:452] Pool worker: node ip-10-0-146-23.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0420 17:17:40.374609       1 node_controller.go:452] Pool worker: node ip-10-0-140-238.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-afa7b49c42af860702a5a49458fbdcf9\nI0420 17:17:40.374637       1 node_controller.go:452] Pool worker: node ip-10-0-140-238.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-afa7b49c42af860702a5a49458fbdcf9\nI0420 17:17:40.374665       1 node_controller.go:452] Pool worker: node ip-10-0-140-238.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0420 17:22:55.052853       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0420 17:22:55.077605       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0420 17:30:45.410184       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0420 17:30:45.444489       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0420 17:36:01.445447       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0420 17:36:01.464916       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Apr 20 17:46:32.574 E ns/openshift-machine-config-operator pod/machine-config-server-f4lnv node/ip-10-0-154-231.ec2.internal container=machine-config-server container exited with code 2 (Error): I0420 17:12:01.557835       1 start.go:38] Version: machine-config-daemon-4.4.0-202004200317-2-g60586bd3-dirty (60586bd33cec88718b868dcca195d6cae8f534b7)\nI0420 17:12:01.558552       1 api.go:51] Launching server on :22624\nI0420 17:12:01.558608       1 api.go:51] Launching server on :22623\nI0420 17:13:56.543226       1 api.go:97] Pool worker requested by 10.0.147.46:20450\nI0420 17:13:59.573891       1 api.go:97] Pool worker requested by 10.0.147.46:54979\n
Apr 20 17:46:44.067 E ns/openshift-marketplace pod/redhat-marketplace-7568885695-7hdgz node/ip-10-0-140-238.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Apr 20 17:46:44.185 E ns/openshift-marketplace pod/certified-operators-55cbd77bdb-m72mk node/ip-10-0-140-238.ec2.internal container=certified-operators container exited with code 2 (Error): 
Apr 20 17:46:44.207 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7b677996b6-m8pn8 node/ip-10-0-140-238.ec2.internal container=operator container exited with code 255 (Error): steroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-20T17:17:26Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-20T17:39:32Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-20T17:39:32Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-20T17:17:29Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0420 17:39:32.873823       1 operator.go:147] Finished syncing operator at 28.667833ms\nI0420 17:39:32.873867       1 operator.go:145] Starting syncing operator at 2020-04-20 17:39:32.873860539 +0000 UTC m=+341.398018411\nI0420 17:39:32.880143       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-operator", UID:"22e8fe07-cd20-42b8-80c7-8297045dfc65", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0420 17:39:32.897286       1 operator.go:147] Finished syncing operator at 23.418158ms\nI0420 17:46:41.661441       1 operator.go:145] Starting syncing operator at 2020-04-20 17:46:41.661427445 +0000 UTC m=+770.185585411\nI0420 17:46:41.727620       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:46:41.727956       1 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/serving-cert-776024116/tls.crt::/tmp/serving-cert-776024116/tls.key\nI0420 17:46:41.728066       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0420 17:46:41.728086       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0420 17:46:41.728100       1 logging_controller.go:93] Shutting down LogLevelController\nF0420 17:46:41.728185       1 builder.go:243] stopped\n
Apr 20 17:46:44.234 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-68d7797f49-bcd8p node/ip-10-0-140-238.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 20 17:46:44.257 E ns/openshift-monitoring pod/kube-state-metrics-dc4fcf85f-97gf2 node/ip-10-0-140-238.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 20 17:46:45.065 E ns/openshift-marketplace pod/community-operators-b4c7cf69-zk2n4 node/ip-10-0-140-238.ec2.internal container=community-operators container exited with code 2 (Error): 
Apr 20 17:46:45.168 E ns/openshift-kube-storage-version-migrator pod/migrator-7b88d45464-wxwsd node/ip-10-0-140-238.ec2.internal container=migrator container exited with code 2 (Error): 
Apr 20 17:46:53.300 E ns/openshift-machine-config-operator pod/machine-config-server-vtmbm node/ip-10-0-142-75.ec2.internal container=machine-config-server container exited with code 2 (Error): I0420 17:10:19.284292       1 start.go:38] Version: machine-config-daemon-4.4.0-202004200317-2-g60586bd3-dirty (60586bd33cec88718b868dcca195d6cae8f534b7)\nI0420 17:10:19.285129       1 api.go:51] Launching server on :22624\nI0420 17:10:19.285177       1 api.go:51] Launching server on :22623\n
Apr 20 17:46:58.066 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-68d7797f49-524zt node/ip-10-0-146-23.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 20 17:47:08.042 E ns/openshift-console pod/console-7b78c64f78-vn8mf node/ip-10-0-154-231.ec2.internal container=console container exited with code 2 (Error): 2020-04-20T17:35:51Z cmd/main: cookies are secure!\n2020-04-20T17:35:51Z cmd/main: Binding to [::]:8443...\n2020-04-20T17:35:51Z cmd/main: using TLS\n2020-04-20T17:38:32Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n2020-04-20T17:38:32Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
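Editor's note: the console's "Client.Timeout exceeded while awaiting headers" messages above are ordinary net/http client timeouts against the OAuth route while the ingress pods roll. A small sketch of the same failure mode using a short overall client timeout; the URL is the route from the events and the 5s deadline is illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // overall request deadline

	// HEAD against the OAuth route, as the console does during auth discovery.
	// If the route is briefly unreachable (e.g. router pods restarting), the
	// returned error ends with "(Client.Timeout exceeded while awaiting headers)".
	resp, err := client.Head("https://oauth-openshift.apps.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com")
	if err != nil {
		fmt.Println("auth source refresh failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("OAuth issuer reachable:", resp.Status)
}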
Apr 20 17:47:11.826 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-123.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-20T17:47:07.031Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-20T17:47:07.037Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-20T17:47:07.037Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-20T17:47:07.038Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-20T17:47:07.038Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-20T17:47:07.038Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-20T17:47:07.039Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-20T17:47:07.039Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-20T17:47:07.039Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-20
Apr 20 17:49:14.772 E ns/openshift-monitoring pod/node-exporter-lmqbw node/ip-10-0-140-238.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:47:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:47:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:47:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:49:14.787 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 20 17:49:14.815 E ns/openshift-cluster-node-tuning-operator pod/tuned-94bdh node/ip-10-0-140-238.ec2.internal container=tuned container exited with code 143 (Error): I0420 17:35:13.047454   80099 tuned.go:169] disabling system tuned...\nI0420 17:35:13.057823   80099 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0420 17:35:13.244871   80099 tuned.go:258] recommended tuned profile openshift-node content changed\nI0420 17:35:13.939153   80099 tuned.go:417] getting recommended profile...\nI0420 17:35:14.161714   80099 tuned.go:444] active profile () != recommended profile (openshift-node)\nI0420 17:35:14.161846   80099 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0420 17:35:14.161900   80099 tuned.go:310] starting tuned...\n2020-04-20 17:35:14,324 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-20 17:35:14,332 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-20 17:35:14,332 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-20 17:35:14,333 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-20 17:35:14,334 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-20 17:35:14,396 INFO     tuned.daemon.controller: starting controller\n2020-04-20 17:35:14,396 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:35:14,411 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:35:14,412 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:35:14,420 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:35:14,423 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:35:14,425 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:35:14,568 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:35:14,584 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 20 17:49:14.867 E ns/openshift-multus pod/multus-lg7pj node/ip-10-0-140-238.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:49:14.892 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzz6p node/ip-10-0-140-238.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:49:18.957 E ns/openshift-multus pod/multus-lg7pj node/ip-10-0-140-238.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:49:24.243 E ns/openshift-machine-config-operator pod/machine-config-daemon-fzz6p node/ip-10-0-140-238.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:49:33.046 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): ck openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nI0420 17:32:48.206607       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0420 17:32:48.209153       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"0a1f8b89-96a3-4fad-a446-36d32f84772c", APIVersion:"v1", ResourceVersion:"26945", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cacffd86-bf51-4c7f-99e1-7b68927dfb8b became leader\nI0420 17:32:48.213665       1 csrcontroller.go:98] Starting CSR controller\nI0420 17:32:48.213691       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0420 17:32:48.214203       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "CSRSigningCert"\nI0420 17:32:48.214288       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "CSRSigningCert"\nI0420 17:32:48.313911       1 shared_informer.go:204] Caches are synced for CSRController \nI0420 17:32:48.313955       1 resourcesync_controller.go:218] Starting ResourceSyncController\nI0420 17:32:48.314507       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "CSRSigningCert"\nI0420 17:47:13.340542       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:47:13.341358       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0420 17:47:13.341399       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0420 17:47:13.341416       1 csrcontroller.go:100] Shutting down CSR controller\nI0420 17:47:13.341428       1 csrcontroller.go:102] CSR controller shut down\n
Apr 20 17:49:33.046 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): icasets.apps)\nE0420 17:31:39.444428       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts)\nE0420 17:31:39.444452       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)\nE0420 17:31:39.445938       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0420 17:31:39.445980       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: unknown (get builds.build.openshift.io)\nE0420 17:31:39.446013       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)\nE0420 17:31:39.446040       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: unknown (get podtemplates)\nE0420 17:31:39.446071       1 reflector.go:307] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: unknown (get imagestreams.image.openshift.io)\nE0420 17:31:39.446104       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0420 17:31:39.446131       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: unknown (get limitranges)\nE0420 17:31:39.446158       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: unknown (get namespaces)\nW0420 17:46:49.207063       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 699; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 20 17:49:33.046 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 7292       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:46:39.848601       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:46:43.710290       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:46:43.710699       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:46:49.869073       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:46:49.869442       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:46:53.721348       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:46:53.721807       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:46:59.878827       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:46:59.879338       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:47:03.732830       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:47:03.733717       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:47:09.902115       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:47:09.902459       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 20 17:49:33.046 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-controller-manager container exited with code 2 (Error): efused\nE0420 17:31:22.306946       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0420 17:31:22.306986       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0420 17:31:24.335784       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:31:28.561484       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:31:29.482609       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0420 17:31:29.482644       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0420 17:31:32.865046       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:31:39.450025       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Apr 20 17:49:33.101 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:46:53.373740       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:46:53.373770       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:46:55.389041       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:46:55.389067       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:46:57.400281       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:46:57.400315       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:46:59.411277       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:46:59.411312       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:01.419285       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:01.419319       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:03.436395       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:03.436433       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:05.448203       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:05.448236       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:07.546516       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:07.546660       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:09.475100       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:09.475230       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:47:11.488407       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:47:11.488438       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 20 17:49:33.101 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-scheduler container exited with code 2 (Error): 79] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-20 16:55:07 +0000 UTC to 2030-04-18 16:55:07 +0000 UTC (now=2020-04-20 17:32:29.553386981 +0000 UTC))\nI0420 17:32:29.553431       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1587402562" [] issuer="kubelet-signer" (2020-04-20 17:09:21 +0000 UTC to 2020-04-21 16:55:12 +0000 UTC (now=2020-04-20 17:32:29.553416525 +0000 UTC))\nI0420 17:32:29.553466       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-20 16:55:10 +0000 UTC to 2020-04-21 16:55:10 +0000 UTC (now=2020-04-20 17:32:29.553452522 +0000 UTC))\nI0420 17:32:29.553692       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1587402563" (2020-04-20 17:09:32 +0000 UTC to 2022-04-20 17:09:33 +0000 UTC (now=2020-04-20 17:32:29.553681463 +0000 UTC))\nI0420 17:32:29.553912       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587403949" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587403949" (2020-04-20 16:32:28 +0000 UTC to 2021-04-20 16:32:28 +0000 UTC (now=2020-04-20 17:32:29.553874254 +0000 UTC))\n
Apr 20 17:49:33.171 E ns/openshift-controller-manager pod/controller-manager-ksmml node/ip-10-0-154-231.ec2.internal container=controller-manager container exited with code 255 (Error): I0420 17:34:26.133689       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0420 17:34:26.136186       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0420 17:34:26.136214       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0420 17:34:26.136335       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0420 17:34:26.136447       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 20 17:49:33.237 E ns/openshift-cluster-node-tuning-operator pod/tuned-rpwwg node/ip-10-0-154-231.ec2.internal container=tuned container exited with code 143 (Error): oller: starting controller\n2020-04-20 17:34:35,946 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:34:35,958 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:34:35,959 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:34:35,964 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:34:35,967 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:34:35,969 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:34:36,151 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:34:36,168 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0420 17:36:00.292806   79915 tuned.go:554] tuned "rendered" changed\nI0420 17:36:00.292834   79915 tuned.go:224] extracting tuned profiles\nI0420 17:36:00.292843   79915 tuned.go:417] getting recommended profile...\nI0420 17:36:00.293493   79915 tuned.go:513] profile "ip-10-0-154-231.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0420 17:36:00.428156   79915 tuned.go:258] recommended tuned profile openshift-control-plane content unchanged\nI0420 17:36:00.649623   79915 tuned.go:417] getting recommended profile...\nI0420 17:36:00.783505   79915 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0420 17:46:53.892875   79915 tuned.go:513] profile "ip-10-0-154-231.ec2.internal" changed, tuned profile requested: openshift-node\nI0420 17:46:53.975858   79915 tuned.go:513] profile "ip-10-0-154-231.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0420 17:46:54.649680   79915 tuned.go:417] getting recommended profile...\nI0420 17:46:54.814870   79915 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 20 17:49:33.277 E ns/openshift-monitoring pod/node-exporter-drqjm node/ip-10-0-154-231.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:46:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:47:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:49:33.318 E ns/openshift-sdn pod/sdn-controller-t44bn node/ip-10-0-154-231.ec2.internal container=sdn-controller container exited with code 2 (Error): I0420 17:37:05.847196       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0420 17:37:05.873149       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"6b512fc5-60bd-4cb8-b8e9-84593317722d", ResourceVersion:"31914", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722999276, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-154-231\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-20T17:07:56Z\",\"renewTime\":\"2020-04-20T17:37:05Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-154-231 became leader'\nI0420 17:37:05.873254       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0420 17:37:05.878140       1 master.go:51] Initializing SDN master\nI0420 17:37:05.894186       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 20 17:49:33.398 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 20 17:49:33.441 E ns/openshift-multus pod/multus-admission-controller-2cvc2 node/ip-10-0-154-231.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 20 17:49:33.492 E ns/openshift-multus pod/multus-fzfdm node/ip-10-0-154-231.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:49:33.536 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcvpf node/ip-10-0-154-231.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:49:33.553 E ns/openshift-machine-config-operator pod/machine-config-server-th9mc node/ip-10-0-154-231.ec2.internal container=machine-config-server container exited with code 2 (Error): I0420 17:46:34.612386       1 start.go:38] Version: machine-config-daemon-4.4.0-202004200317-2-g60586bd3-dirty (60586bd33cec88718b868dcca195d6cae8f534b7)\nI0420 17:46:34.613550       1 api.go:51] Launching server on :22624\nI0420 17:46:34.613695       1 api.go:51] Launching server on :22623\n
Apr 20 17:49:37.265 E ns/openshift-monitoring pod/thanos-querier-6c9dd6868b-c76c6 node/ip-10-0-136-123.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/20 17:46:51 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:46:51 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/20 17:46:51 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/20 17:46:51 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/20 17:46:51 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/20 17:46:51 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:46:51 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/20 17:46:51 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0420 17:46:51.203443       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/20 17:46:51 http.go:107: HTTPS: listening on [::]:9091\n
Apr 20 17:49:37.440 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): ision has been compacted\nE0420 17:47:13.672284       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672369       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672487       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672512       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672642       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672716       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672760       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.672817       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.673552       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.673589       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.673620       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.679866       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.681272       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.681300       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.681343       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.681371       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0420 17:47:13.683785       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Apr 20 17:49:37.440 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0420 17:30:13.246079       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 20 17:49:37.440 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:47:10.873778       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:47:10.874228       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:47:13.618806       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:47:13.622319       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 20 17:49:37.440 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): lancer.go:26] syncing external loadbalancer hostnames: api.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com\nI0420 17:44:50.016174       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0420 17:47:13.399804       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:47:13.414979       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0420 17:47:13.415645       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0420 17:47:13.415663       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0420 17:47:13.415682       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0420 17:47:13.415700       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0420 17:47:13.415739       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0420 17:47:13.415755       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0420 17:47:13.415774       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0420 17:47:13.415811       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0420 17:47:13.415826       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0420 17:47:13.415962       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0420 17:47:13.415977       1 cabundlesyncer.go:86] CA bundle controller shut down\nF0420 17:47:13.619203       1 leaderelection.go:67] leaderelection lost\n
Apr 20 17:49:37.792 E ns/openshift-etcd pod/etcd-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-20 17:30:40.704062 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-154-231.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-154-231.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-20 17:30:40.705238 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-20 17:30:40.705877 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-154-231.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-154-231.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/20 17:30:40 grpc: addrConn.createTransport failed to connect to {https://10.0.154.231:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.231:9978: connect: connection refused". Reconnecting...\n2020-04-20 17:30:40.709141 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/20 17:30:41 grpc: addrConn.createTransport failed to connect to {https://10.0.154.231:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.154.231:9978: connect: connection refused". Reconnecting...\n
Apr 20 17:49:39.321 E ns/openshift-monitoring pod/telemeter-client-7ff45c5494-zvpgh node/ip-10-0-136-123.ec2.internal container=reload container exited with code 2 (Error): 
Apr 20 17:49:39.321 E ns/openshift-monitoring pod/telemeter-client-7ff45c5494-zvpgh node/ip-10-0-136-123.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Apr 20 17:49:39.377 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-123.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/04/20 17:47:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 20 17:49:39.377 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-123.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/04/20 17:47:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/20 17:47:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/20 17:47:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/20 17:47:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/20 17:47:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/20 17:47:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/20 17:47:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/20 17:47:10 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/20 17:47:10 http.go:107: HTTPS: listening on [::]:9091\nI0420 17:47:10.877616       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 20 17:49:39.377 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-123.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-20T17:47:10.106829121Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-04-20T17:47:10.106951092Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-04-20T17:47:10.108302944Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-20T17:47:15.250086912Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Apr 20 17:49:39.966 E ns/openshift-multus pod/multus-fzfdm node/ip-10-0-154-231.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:49:43.667 E ns/openshift-machine-config-operator pod/machine-config-daemon-hcvpf node/ip-10-0-154-231.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:49:48.476 E ns/openshift-multus pod/multus-fzfdm node/ip-10-0-154-231.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:49:54.563 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-238.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-20T17:49:53.062Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-20T17:49:53.065Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-20T17:49:53.065Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-20T17:49:53.066Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-20T17:49:53.066Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-20T17:49:53.066Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-20T17:49:53.068Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-20T17:49:53.068Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-20
Apr 20 17:50:01.150 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-154-231.ec2.internal members are unhealthy,  members are unknown
Apr 20 17:50:26.068 E ns/openshift-insights pod/insights-operator-7f4b54dfbf-6tj5c node/ip-10-0-140-162.ec2.internal container=operator container exited with code 2 (Error): 0562]\nI0420 17:47:38.830104       1 httplog.go:90] GET /metrics: (1.711806ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\nI0420 17:47:58.901875       1 httplog.go:90] GET /metrics: (12.84248ms) 200 [Prometheus/2.15.2 10.131.0.27:40562]\nI0420 17:48:08.830156       1 httplog.go:90] GET /metrics: (1.72662ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\nI0420 17:48:28.895372       1 httplog.go:90] GET /metrics: (6.382174ms) 200 [Prometheus/2.15.2 10.131.0.27:40562]\nI0420 17:48:38.830244       1 httplog.go:90] GET /metrics: (1.82197ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\nI0420 17:48:41.965854       1 diskrecorder.go:303] Found files to send: [/var/lib/insights-operator/insights-2020-04-20-174701.tar.gz]\nI0420 17:48:41.965929       1 insightsuploader.go:126] Uploading latest report since 0001-01-01T00:00:00Z\nI0420 17:48:41.976403       1 insightsclient.go:164] Uploading application/vnd.redhat.openshift.periodic to https://cloud.redhat.com/api/ingress/v1/upload\nI0420 17:48:42.211567       1 insightsclient.go:214] Successfully reported id=2020-04-20T17:48:41Z x-rh-insights-request-id=73718c2d1c0a402e9929a3b15f666d36, wrote=14175\nI0420 17:48:42.211660       1 insightsuploader.go:150] Uploaded report successfully in 245.735082ms\nI0420 17:48:42.211684       1 status.go:89] Initializing last reported time to 2020-04-20T17:48:41Z\nI0420 17:48:42.216019       1 status.go:298] The operator is healthy\nI0420 17:48:58.894601       1 httplog.go:90] GET /metrics: (5.553354ms) 200 [Prometheus/2.15.2 10.131.0.27:40562]\nI0420 17:49:00.979440       1 status.go:298] The operator is healthy\nI0420 17:49:08.830076       1 httplog.go:90] GET /metrics: (1.689086ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\nI0420 17:49:28.895518       1 httplog.go:90] GET /metrics: (6.563806ms) 200 [Prometheus/2.15.2 10.131.0.27:40562]\nI0420 17:49:38.830292       1 httplog.go:90] GET /metrics: (1.760451ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\nI0420 17:50:08.837447       1 httplog.go:90] GET /metrics: (8.794922ms) 200 [Prometheus/2.15.2 10.128.2.27:46350]\n
Apr 20 17:50:30.081 E ns/openshift-machine-config-operator pod/machine-config-controller-f84bc667-cgkc4 node/ip-10-0-140-162.ec2.internal container=machine-config-controller container exited with code 2 (Error): 8\nI0420 17:49:26.127454       1 node_controller.go:435] Pool worker: node ip-10-0-140-238.ec2.internal is now reporting ready\nI0420 17:49:31.108806       1 node_controller.go:758] Setting node ip-10-0-136-123.ec2.internal to desired config rendered-worker-646d8377f769f36fcddaf771ac0511c8\nI0420 17:49:31.125507       1 node_controller.go:452] Pool worker: node ip-10-0-136-123.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-646d8377f769f36fcddaf771ac0511c8\nI0420 17:49:32.139086       1 node_controller.go:452] Pool worker: node ip-10-0-136-123.ec2.internal changed machineconfiguration.openshift.io/state = Working\nI0420 17:49:32.171109       1 node_controller.go:433] Pool worker: node ip-10-0-136-123.ec2.internal is now reporting unready: node ip-10-0-136-123.ec2.internal is reporting Unschedulable\nI0420 17:49:32.889059       1 node_controller.go:433] Pool master: node ip-10-0-154-231.ec2.internal is now reporting unready: node ip-10-0-154-231.ec2.internal is reporting Unschedulable\nI0420 17:50:14.213548       1 node_controller.go:442] Pool master: node ip-10-0-154-231.ec2.internal has completed update to rendered-master-627da9f410896cf5256d92a14ec96763\nI0420 17:50:14.225375       1 node_controller.go:435] Pool master: node ip-10-0-154-231.ec2.internal is now reporting ready\nI0420 17:50:19.213809       1 node_controller.go:758] Setting node ip-10-0-140-162.ec2.internal to desired config rendered-master-627da9f410896cf5256d92a14ec96763\nI0420 17:50:19.230468       1 node_controller.go:452] Pool master: node ip-10-0-140-162.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-627da9f410896cf5256d92a14ec96763\nI0420 17:50:20.247507       1 node_controller.go:452] Pool master: node ip-10-0-140-162.ec2.internal changed machineconfiguration.openshift.io/state = Working\nI0420 17:50:20.270464       1 node_controller.go:433] Pool master: node ip-10-0-140-162.ec2.internal is now reporting unready: node ip-10-0-140-162.ec2.internal is reporting Unschedulable\n
Apr 20 17:51:38.113 E ns/openshift-marketplace pod/redhat-marketplace-7568885695-9wqpw node/ip-10-0-146-23.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Apr 20 17:52:15.657 E ns/openshift-monitoring pod/node-exporter-nz7ng node/ip-10-0-136-123.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:49:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:49:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:49:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:52:15.695 E ns/openshift-cluster-node-tuning-operator pod/tuned-4tzm2 node/ip-10-0-136-123.ec2.internal container=tuned container exited with code 143 (Error): n automatic mode, checking what profile is recommended for your configuration.\n2020-04-20 17:35:56,047 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-20 17:35:56,047 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-20 17:35:56,080 INFO     tuned.daemon.controller: starting controller\n2020-04-20 17:35:56,080 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:35:56,091 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:35:56,092 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:35:56,095 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:35:56,096 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:35:56,098 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:35:56,238 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:35:56,250 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0420 17:47:13.764742   40574 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0420 17:47:13.765214   40574 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0420 17:47:15.049956   40574 tuned.go:513] profile "ip-10-0-136-123.ec2.internal" changed, tuned profile requested: openshift-node\nI0420 17:47:15.056555   40574 tuned.go:554] tuned "rendered" changed\nI0420 17:47:15.056576   40574 tuned.go:224] extracting tuned profiles\nI0420 17:47:15.056592   40574 tuned.go:417] getting recommended profile...\nI0420 17:47:15.187702   40574 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0420 17:47:15.817412   40574 tuned.go:417] getting recommended profile...\nI0420 17:47:15.931288   40574 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 20 17:52:15.737 E ns/openshift-multus pod/multus-gd8fb node/ip-10-0-136-123.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:52:15.768 E ns/openshift-machine-config-operator pod/machine-config-daemon-578jb node/ip-10-0-136-123.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:52:17.599 E ns/openshift-monitoring pod/node-exporter-nz7ng node/ip-10-0-136-123.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:52:19.647 E ns/openshift-multus pod/multus-gd8fb node/ip-10-0-136-123.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:52:24.536 E ns/openshift-machine-config-operator pod/machine-config-daemon-578jb node/ip-10-0-136-123.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:52:44.567 E ns/openshift-monitoring pod/thanos-querier-6c9dd6868b-9lv6z node/ip-10-0-146-23.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/20 17:34:49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:34:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/20 17:34:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/20 17:34:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/20 17:34:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/20 17:34:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/20 17:34:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/20 17:34:49 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0420 17:34:49.872635       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/20 17:34:49 http.go:107: HTTPS: listening on [::]:9091\n
Apr 20 17:52:45.632 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-146-23.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/04/20 17:47:00 Watching directory: "/etc/alertmanager/config"\n
Apr 20 17:52:45.632 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-146-23.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/20 17:47:00 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/20 17:47:00 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/20 17:47:00 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/20 17:47:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/20 17:47:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/20 17:47:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/20 17:47:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/20 17:47:00 http.go:107: HTTPS: listening on [::]:9095\nI0420 17:47:00.379708       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 20 17:53:00.752 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-136-123.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-20T17:52:58.459Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-20T17:52:58.466Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-20T17:52:58.469Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-20T17:52:58.470Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-20T17:52:58.471Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-20T17:52:58.471Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-20T17:52:58.472Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-20T17:52:58.472Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-20T17:52:58.474Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-20T17:52:58.475Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-20
Apr 20 17:53:01.587 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-140-162.ec2.internal" not ready since 2020-04-20 17:51:33 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-140-162.ec2.internal members are unhealthy,  members are unknown
Apr 20 17:53:15.178 E ns/openshift-sdn pod/sdn-controller-jzznc node/ip-10-0-140-162.ec2.internal container=sdn-controller container exited with code 2 (Error): I0420 17:37:17.130820       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 20 17:53:15.195 E ns/openshift-multus pod/multus-admission-controller-t448t node/ip-10-0-140-162.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 20 17:53:15.226 E ns/openshift-multus pod/multus-ggwmj node/ip-10-0-140-162.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:53:15.243 E ns/openshift-machine-config-operator pod/machine-config-server-p5s45 node/ip-10-0-140-162.ec2.internal container=machine-config-server container exited with code 2 (Error): I0420 17:46:48.098185       1 start.go:38] Version: machine-config-daemon-4.4.0-202004200317-2-g60586bd3-dirty (60586bd33cec88718b868dcca195d6cae8f534b7)\nI0420 17:46:48.100170       1 api.go:51] Launching server on :22624\nI0420 17:46:48.100304       1 api.go:51] Launching server on :22623\n
Apr 20 17:53:15.259 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): I0420 17:33:23.326300       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0420 17:33:23.328089       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0420 17:33:23.329901       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0420 17:33:23.329984       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0420 17:36:03.846838       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:23.352375       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 20 17:53:15.259 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 1721       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:23.062092       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:31.377939       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:31.378309       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:33.069868       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:33.070227       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:41.401544       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:41.402014       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:43.086156       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:43.086618       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:51.398597       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:51.398922       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:50:53.104569       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:53.104969       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 20 17:53:15.259 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-controller-manager container exited with code 2 (Error): er-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1587402563" (2020-04-20 17:09:35 +0000 UTC to 2022-04-20 17:09:36 +0000 UTC (now=2020-04-20 17:36:11.215487021 +0000 UTC))\nI0420 17:36:11.216084       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587404171" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587404170" (2020-04-20 16:36:10 +0000 UTC to 2021-04-20 16:36:10 +0000 UTC (now=2020-04-20 17:36:11.216063193 +0000 UTC))\nI0420 17:36:11.216164       1 secure_serving.go:178] Serving securely on [::]:10257\nI0420 17:36:11.216255       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0420 17:36:11.216707       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0420 17:36:11.217825       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:14.696899       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:18.292267       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:24.771997       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\n
Apr 20 17:53:15.259 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0420 17:32:48.850085       1 cmd.go:200] Using insecure, self-signed certificates\nI0420 17:32:48.850398       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587403968 cert, and key in /tmp/serving-cert-523607254/serving-signer.crt, /tmp/serving-cert-523607254/serving-signer.key\nI0420 17:32:49.987163       1 observer_polling.go:155] Starting file observer\nI0420 17:32:50.017050       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0420 17:36:14.777779       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:29.629279       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager": RBAC: clusterrole.rbac.authorization.k8s.io "system:image-puller" not found\nI0420 17:50:56.326043       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0420 17:50:56.326177       1 leaderelection.go:67] leaderelection lost\n
Apr 20 17:53:15.276 E ns/openshift-cluster-node-tuning-operator pod/tuned-ggmd7 node/ip-10-0-140-162.ec2.internal container=tuned container exited with code 143 (Error): o:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0420 17:35:42.127211   89336 tuned.go:310] starting tuned...\n2020-04-20 17:35:42,292 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-20 17:35:42,307 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-20 17:35:42,307 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-20 17:35:42,307 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-20 17:35:42,308 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-20 17:35:42,391 INFO     tuned.daemon.controller: starting controller\n2020-04-20 17:35:42,392 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:35:42,406 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:35:42,407 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:35:42,413 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:35:42,415 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:35:42,417 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:35:42,613 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:35:42,627 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0420 17:46:53.904327   89336 tuned.go:513] profile "ip-10-0-140-162.ec2.internal" changed, tuned profile requested: openshift-node\nI0420 17:46:53.962969   89336 tuned.go:417] getting recommended profile...\nI0420 17:46:53.987481   89336 tuned.go:513] profile "ip-10-0-140-162.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0420 17:46:54.236387   89336 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 20 17:53:15.293 E ns/openshift-machine-config-operator pod/machine-config-daemon-mw7wm node/ip-10-0-140-162.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:53:15.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): new addresses to cc: [{https://10.0.140.162:2379 0  <nil>} {https://10.0.142.75:2379 0  <nil>} {https://10.0.154.231:2379 0  <nil>} {https://localhost:2379 0  <nil>}]\nI0420 17:50:37.389585       1 store.go:1342] Monitoring profiles.tuned.openshift.io count at <storage-prefix>//tuned.openshift.io/profiles\nI0420 17:50:42.080429       1 trace.go:116] Trace[2096850606]: "List" url:/api/v1/secrets,user-agent:olm/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.154.231 (started: 2020-04-20 17:50:41.529879357 +0000 UTC m=+856.285283268) (total time: 550.515747ms):\nTrace[2096850606]: [550.512867ms] [549.40737ms] Writing http response done count:1155\nE0420 17:50:45.274609       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0420 17:50:52.450265       1 controller.go:114] loading OpenAPI spec for "v1.packages.operators.coreos.com" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error trying to reach service: 'x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "Red Hat, Inc.")', Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]\nI0420 17:50:52.450310       1 controller.go:127] OpenAPI AggregationController: action for item v1.packages.operators.coreos.com: Rate Limited Requeue.\nI0420 17:50:56.335529       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-140-162.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0420 17:50:56.335572       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Apr 20 17:53:15.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0420 17:34:52.345133       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 20 17:53:15.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:50:37.819515       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:37.819894       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:50:47.858610       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:50:47.858965       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 20 17:53:15.336 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0420 17:34:51.914040       1 cmd.go:200] Using insecure, self-signed certificates\nI0420 17:34:51.914507       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587404091 cert, and key in /tmp/serving-cert-784889042/serving-signer.crt, /tmp/serving-cert-784889042/serving-signer.key\nI0420 17:34:52.523716       1 observer_polling.go:155] Starting file observer\nI0420 17:34:52.557684       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0420 17:36:09.069223       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nE0420 17:36:23.448977       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0420 17:50:56.279612       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0420 17:50:56.279666       1 leaderelection.go:67] leaderelection lost\n
Apr 20 17:53:15.357 E ns/openshift-controller-manager pod/controller-manager-7zh4s node/ip-10-0-140-162.ec2.internal container=controller-manager container exited with code 1 (Error): I0420 17:34:29.743527       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0420 17:34:29.747223       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0420 17:34:29.747302       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-ty1sl3vp/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0420 17:34:29.747386       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0420 17:34:29.747695       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 20 17:53:15.396 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-scheduler container exited with code 2 (Error): 021-04-20 16:55:12 +0000 UTC (now=2020-04-20 17:36:30.747056526 +0000 UTC))\nI0420 17:36:30.747121       1 tlsconfig.go:179] loaded client CA [3/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-04-20 16:55:13 +0000 UTC to 2021-04-20 16:55:13 +0000 UTC (now=2020-04-20 17:36:30.747094325 +0000 UTC))\nI0420 17:36:30.747150       1 tlsconfig.go:179] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-20 16:55:07 +0000 UTC to 2030-04-18 16:55:07 +0000 UTC (now=2020-04-20 17:36:30.747134032 +0000 UTC))\nI0420 17:36:30.747194       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kube-csr-signer_@1587402562" [] issuer="kubelet-signer" (2020-04-20 17:09:21 +0000 UTC to 2020-04-21 16:55:12 +0000 UTC (now=2020-04-20 17:36:30.747161789 +0000 UTC))\nI0420 17:36:30.747634       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1587402563" (2020-04-20 17:09:32 +0000 UTC to 2022-04-20 17:09:33 +0000 UTC (now=2020-04-20 17:36:30.747610937 +0000 UTC))\nI0420 17:36:30.749902       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1587404170" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1587404169" (2020-04-20 16:36:09 +0000 UTC to 2021-04-20 16:36:09 +0000 UTC (now=2020-04-20 17:36:30.749885978 +0000 UTC))\nI0420 17:36:30.789866       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Apr 20 17:53:15.396 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:36.565126       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:36.565292       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:38.576167       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:38.576307       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:40.596886       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:40.597022       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:42.599387       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:42.599418       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:44.610210       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:44.610238       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:46.621906       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:46.621937       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:48.630631       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:48.630676       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:50.637546       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:50.637580       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:52.648595       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:52.648774       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:50:54.658723       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:50:54.658752       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 20 17:53:15.414 E ns/openshift-monitoring pod/node-exporter-z2mth node/ip-10-0-140-162.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:50:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:53:20.028 E ns/openshift-etcd pod/etcd-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-20 17:30:06.104443 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-140-162.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-140-162.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-20 17:30:06.105111 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-20 17:30:06.105459 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-140-162.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-140-162.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/20 17:30:06 grpc: addrConn.createTransport failed to connect to {https://10.0.140.162:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.140.162:9978: connect: connection refused". Reconnecting...\n2020-04-20 17:30:06.107594 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/20 17:30:07 grpc: addrConn.createTransport failed to connect to {https://10.0.140.162:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.140.162:9978: connect: connection refused". Reconnecting...\n
Apr 20 17:53:21.249 E ns/openshift-multus pod/multus-ggwmj node/ip-10-0-140-162.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:53:23.783 E ns/openshift-multus pod/multus-ggwmj node/ip-10-0-140-162.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:53:27.127 E ns/openshift-multus pod/multus-ggwmj node/ip-10-0-140-162.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:53:29.450 E ns/openshift-machine-config-operator pod/machine-config-daemon-mw7wm node/ip-10-0-140-162.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:53:41.522 E ns/openshift-etcd-operator pod/etcd-operator-557ddff46b-tjzzs node/ip-10-0-142-75.ec2.internal container=operator container exited with code 255 (Error): ] Shutting down BootstrapTeardownController\nI0420 17:53:40.382941       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0420 17:53:40.382955       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0420 17:53:40.382969       1 targetconfigcontroller.go:270] Shutting down TargetConfigController\nI0420 17:53:40.382989       1 base_controller.go:74] Shutting down RevisionController ...\nI0420 17:53:40.383003       1 envvarcontroller.go:177] Shutting down EnvVarController\nI0420 17:53:40.383019       1 base_controller.go:74] Shutting down NodeController ...\nI0420 17:53:40.383031       1 etcdmemberipmigrator.go:299] Shutting down EtcdMemberIPMigrator\nI0420 17:53:40.383045       1 clustermembercontroller.go:99] Shutting down ClusterMemberController\nI0420 17:53:40.383061       1 base_controller.go:74] Shutting down InstallerController ...\nI0420 17:53:40.383076       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0420 17:53:40.383089       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0420 17:53:40.383101       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0420 17:53:40.383116       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0420 17:53:40.383128       1 scriptcontroller.go:161] Shutting down ScriptControllerController\nI0420 17:53:40.383142       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0420 17:53:40.383158       1 base_controller.go:74] Shutting down  ...\nI0420 17:53:40.383172       1 base_controller.go:74] Shutting down PruneController ...\nI0420 17:53:40.383185       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0420 17:53:40.383199       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0420 17:53:40.383213       1 base_controller.go:74] Shutting down  ...\nI0420 17:53:40.383230       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0420 17:53:40.383744       1 builder.go:243] stopped\n
Apr 20 17:53:44.695 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-fb74b5bc-jh99d node/ip-10-0-142-75.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ator", UID:"b3da8b4f-4076-4ce2-9802-08c206996b45", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nW0420 17:47:13.793313       1 reflector.go:340] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Unexpected watch close - watch lasted less than a second and no items received\nW0420 17:47:13.798046       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nI0420 17:49:32.811119       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b3da8b4f-4076-4ce2-9802-08c206996b45", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0420 17:49:36.426221       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b3da8b4f-4076-4ce2-9802-08c206996b45", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0420 17:53:43.404632       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0420 17:53:43.404700       1 leaderelection.go:66] leaderelection lost\n
Apr 20 17:53:55.949 E ns/openshift-operator-lifecycle-manager pod/packageserver-7fc5cc888-xplrt node/ip-10-0-140-162.ec2.internal container=packageserver container exited with code 2 (Error): 
Apr 20 17:54:34.440 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Definitions ConfigMaps failed: updating ConfigMap object failed: Put https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/grafana-dashboard-cluster-total: dial tcp 172.30.0.1:443: connect: connection refused
Apr 20 17:55:05.943 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:55:22.013 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:55:25.273 E ns/openshift-cluster-node-tuning-operator pod/tuned-fcm52 node/ip-10-0-146-23.ec2.internal container=tuned container exited with code 143 (Error): ng controller\n2020-04-20 17:34:55,849 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:34:55,864 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:34:55,865 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:34:55,870 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:34:55,872 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:34:55,874 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:34:56,093 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:34:56,111 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0420 17:47:13.767040   63293 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0420 17:47:13.768330   63293 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0420 17:47:15.475583   63293 tuned.go:513] profile "ip-10-0-146-23.ec2.internal" changed, tuned profile requested: openshift-node\nI0420 17:47:15.476615   63293 tuned.go:554] tuned "rendered" changed\nI0420 17:47:15.476643   63293 tuned.go:224] extracting tuned profiles\nI0420 17:47:15.476653   63293 tuned.go:417] getting recommended profile...\nI0420 17:47:15.652171   63293 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0420 17:47:16.453421   63293 tuned.go:417] getting recommended profile...\nI0420 17:47:16.632680   63293 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0420 17:50:58.710233   63293 tuned.go:554] tuned "rendered" changed\nI0420 17:50:58.710261   63293 tuned.go:224] extracting tuned profiles\nI0420 17:50:58.710270   63293 tuned.go:417] getting recommended profile...\nI0420 17:50:58.833936   63293 tuned.go:258] recommended tuned profile openshift-node content unchanged\n
Apr 20 17:55:25.311 E ns/openshift-monitoring pod/node-exporter-9wwsv node/ip-10-0-146-23.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:52:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:52:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:52:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:55:25.363 E ns/openshift-multus pod/multus-k2sgn node/ip-10-0-146-23.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:55:25.397 E ns/openshift-machine-config-operator pod/machine-config-daemon-nnfwq node/ip-10-0-146-23.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:55:29.181 E ns/openshift-multus pod/multus-k2sgn node/ip-10-0-146-23.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:55:34.055 E ns/openshift-machine-config-operator pod/machine-config-daemon-nnfwq node/ip-10-0-146-23.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:55:56.245 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:56:10.899 E kube-apiserver failed contacting the API: Get https://api.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=40767&timeout=8m45s&timeoutSeconds=525&watch=true: dial tcp 52.23.86.206:6443: connect: connection refused
Apr 20 17:56:22.359 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-controller-manager container exited with code 255 (Error): utSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:22.109017       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/networks?allowWatchBookmarks=true&resourceVersion=44078&timeout=8m19s&timeoutSeconds=499&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:22.109971       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=39190&timeout=8m43s&timeoutSeconds=523&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:22.111085       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/ingresses?allowWatchBookmarks=true&resourceVersion=40364&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:22.112175       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/projects?allowWatchBookmarks=true&resourceVersion=40503&timeout=8m48s&timeoutSeconds=528&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:22.116164       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/proxies?allowWatchBookmarks=true&resourceVersion=40368&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:56:22.116589       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0420 17:56:22.116674       1 controllermanager.go:291] leaderelection lost\n
Apr 20 17:56:23.385 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=kube-scheduler container exited with code 255 (Error): ocalhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=42262&timeout=9m3s&timeoutSeconds=543&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:21.884958       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=39231&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:21.888337       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=43888&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:21.890552       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=44990&timeoutSeconds=375&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:21.894028       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=39230&timeout=8m38s&timeoutSeconds=518&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:21.897606       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=43988&timeout=8m1s&timeoutSeconds=481&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:56:22.423978       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0420 17:56:22.424012       1 server.go:257] leaderelection lost\n
Apr 20 17:56:33.624 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): ck openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nI0420 17:47:22.820140       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0420 17:47:22.820356       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"0a1f8b89-96a3-4fad-a446-36d32f84772c", APIVersion:"v1", ResourceVersion:"37645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' fdbda4b2-300b-4953-9e41-6003868e3b24 became leader\nI0420 17:47:22.826988       1 csrcontroller.go:98] Starting CSR controller\nI0420 17:47:22.827012       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0420 17:47:22.830416       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "CSRSigningCert"\nI0420 17:47:22.830435       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "CSRSigningCert"\nI0420 17:47:22.927246       1 shared_informer.go:204] Caches are synced for CSRController \nI0420 17:47:22.927294       1 resourcesync_controller.go:218] Starting ResourceSyncController\nI0420 17:47:22.930635       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "CSRSigningCert"\nI0420 17:54:13.560880       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:54:13.561298       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0420 17:54:13.561434       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0420 17:54:13.561450       1 csrcontroller.go:100] Shutting down CSR controller\nI0420 17:54:13.561459       1 csrcontroller.go:102] CSR controller shut down\n
Apr 20 17:56:33.624 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): rollers\nI0420 17:48:14.074174       1 shared_informer.go:204] Caches are synced for resource quota \nW0420 17:50:22.571283       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 293; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.766695       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 265; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.766875       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 393; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.767810       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 283; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.769025       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 297; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.769149       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 305; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 20 17:56:33.624 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 6835       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:06.917706       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:06.920732       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:06.921034       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:06.922148       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:06.922382       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:06.934658       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:06.935152       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:06.937172       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:06.937566       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:11.634207       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:11.635057       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0420 17:54:13.370627       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:13.371054       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 20 17:56:33.624 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-controller-manager container exited with code 2 (Error): enshift-operator-lifecycle-manager/packageserver-67df4f9bd7: packageserver-6ccdfc55b6, packageserver-67df4f9bd7, packageserver-7fc5cc888, packageserver-7df7db7fb7, packageserver-b5cd69f48, packageserver-574f75749f, packageserver-5f4f7b47db, packageserver-5745d6656c\nI0420 17:54:05.751720       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"dfd10d1a-71e9-4745-9480-d324b74d10ed", APIVersion:"apps/v1", ResourceVersion:"43731", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-67df4f9bd7 to 0\nI0420 17:54:05.752093       1 controller_utils.go:603] Controller packageserver-67df4f9bd7 deleting pod openshift-operator-lifecycle-manager/packageserver-67df4f9bd7-2z7m9\nI0420 17:54:05.784458       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-67df4f9bd7", UID:"7200bf8e-f5c0-4419-b644-5fbce35ef8b8", APIVersion:"apps/v1", ResourceVersion:"43980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-67df4f9bd7-2z7m9\nI0420 17:54:08.977751       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/openshift-state-metrics: Operation cannot be fulfilled on deployments.apps "openshift-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nI0420 17:54:09.369991       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/prometheus-operator: Operation cannot be fulfilled on deployments.apps "prometheus-operator": the object has been modified; please apply your changes to the latest version and try again\nI0420 17:54:11.166565       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/kube-state-metrics: Operation cannot be fulfilled on deployments.apps "kube-state-metrics": the object has been modified; please apply your changes to the latest version and try again\n
Apr 20 17:56:33.694 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:04.822748       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:04.824000       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:06.837556       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:06.837704       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:06.851534       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:06.851560       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:06.852061       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:06.852081       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:07.046484       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:07.046540       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:07.047655       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:07.047685       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:07.048910       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:07.048934       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:08.844400       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:08.844562       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:10.854571       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:10.854814       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0420 17:54:12.862848       1 certsync_controller.go:65] Syncing configmaps: []\nI0420 17:54:12.862874       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 20 17:56:33.694 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-scheduler container exited with code 2 (Error): ch node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:53:52.331088       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-794cf9c47d-xkd87: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:53:55.174578       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7df7db7fb7-llb8g is bound successfully on node "ip-10-0-140-162.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0420 17:53:56.681262       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-794cf9c47d-xkd87: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:53:57.682363       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-7446659c9d-w2b7z: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:54:05.290633       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-794cf9c47d-xkd87: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:54:08.688238       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-7446659c9d-w2b7z: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0420 17:54:10.688616       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-7446659c9d-w2b7z: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Apr 20 17:56:33.763 E ns/openshift-controller-manager pod/controller-manager-8svkx node/ip-10-0-142-75.ec2.internal container=controller-manager container exited with code 1 (Error): dial tcp 172.30.0.1:443: connect: connection refused\nE0420 17:50:56.651098       1 reflector.go:320] github.com/openshift/openshift-controller-manager/pkg/unidling/controller/unidling_controller.go:199: Failed to watch *v1.Event: Get https://172.30.0.1:443/api/v1/events?allowWatchBookmarks=true&fieldSelector=reason%3DNeedPods&resourceVersion=34780&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0420 17:50:56.671271       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://172.30.0.1:443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=40985&timeout=8m53s&timeoutSeconds=533&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0420 17:53:38.770630       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 7; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.773854       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.774029       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 73; INTERNAL_ERROR") has prevented the request from succeeding\nW0420 17:53:38.774199       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 20 17:56:33.785 E ns/openshift-monitoring pod/node-exporter-5mwxl node/ip-10-0-142-75.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:33Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:53:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:54:03Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-20T17:54:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 20 17:56:33.803 E ns/openshift-cluster-node-tuning-operator pod/tuned-cdm4r node/ip-10-0-142-75.ec2.internal container=tuned container exited with code 143 (Error): 17:35:28,590 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-20 17:35:28,591 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-20 17:35:28,593 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-20 17:35:28,667 INFO     tuned.daemon.controller: starting controller\n2020-04-20 17:35:28,671 INFO     tuned.daemon.daemon: starting tuning\n2020-04-20 17:35:28,685 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-20 17:35:28,685 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-20 17:35:28,690 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-20 17:35:28,692 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-20 17:35:28,694 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-20 17:35:28,880 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-20 17:35:28,900 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0420 17:46:53.883612   82749 tuned.go:513] profile "ip-10-0-142-75.ec2.internal" changed, tuned profile requested: openshift-node\nI0420 17:46:53.968271   82749 tuned.go:513] profile "ip-10-0-142-75.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0420 17:46:54.165705   82749 tuned.go:417] getting recommended profile...\nI0420 17:46:54.322761   82749 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0420 17:50:58.751049   82749 tuned.go:554] tuned "rendered" changed\nI0420 17:50:58.751398   82749 tuned.go:224] extracting tuned profiles\nI0420 17:50:58.751454   82749 tuned.go:417] getting recommended profile...\nI0420 17:50:59.418680   82749 tuned.go:258] recommended tuned profile openshift-control-plane content unchanged\n
Apr 20 17:56:33.822 E ns/openshift-sdn pod/sdn-controller-txzms node/ip-10-0-142-75.ec2.internal container=sdn-controller container exited with code 2 (Error): 7:24.170976       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0420 17:48:22.654441       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"6b512fc5-60bd-4cb8-b8e9-84593317722d", ResourceVersion:"38106", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722999276, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-142-75\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-20T17:48:22Z\",\"renewTime\":\"2020-04-20T17:48:22Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-142-75 became leader'\nI0420 17:48:22.654536       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0420 17:48:22.661699       1 master.go:51] Initializing SDN master\nI0420 17:48:22.682828       1 network_controller.go:61] Started OpenShift Network Controller\nE0420 17:50:56.726854       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://api-int.ci-op-ty1sl3vp-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=37516&timeout=5m16s&timeoutSeconds=316&watch=true: dial tcp 10.0.141.15:6443: connect: connection refused\n
Apr 20 17:56:33.877 E ns/openshift-multus pod/multus-pg66q node/ip-10-0-142-75.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 20 17:56:33.893 E ns/openshift-multus pod/multus-admission-controller-xvcdp node/ip-10-0-142-75.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 20 17:56:33.935 E ns/openshift-machine-config-operator pod/machine-config-daemon-7c2rn node/ip-10-0-142-75.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 20 17:56:33.950 E ns/openshift-machine-config-operator pod/machine-config-server-hlnmc node/ip-10-0-142-75.ec2.internal container=machine-config-server container exited with code 2 (Error): I0420 17:47:03.496007       1 start.go:38] Version: machine-config-daemon-4.4.0-202004200317-2-g60586bd3-dirty (60586bd33cec88718b868dcca195d6cae8f534b7)\nI0420 17:47:03.497515       1 api.go:51] Launching server on :22624\nI0420 17:47:03.497582       1 api.go:51] Launching server on :22623\n
Apr 20 17:56:36.741 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): ase apply your changes to the latest version and try again\nE0420 17:53:45.355639       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0420 17:53:45.418438       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0420 17:53:45.439036       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0420 17:54:04.029971       1 controller.go:606] quota admission added evaluator for: servicemonitors.monitoring.coreos.com\nI0420 17:54:04.030721       1 controller.go:606] quota admission added evaluator for: servicemonitors.monitoring.coreos.com\nE0420 17:54:06.772925       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0420 17:54:06.772950       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0420 17:54:11.563292       1 controller.go:606] quota admission added evaluator for: daemonsets.apps\nI0420 17:54:13.466030       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-142-75.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0420 17:54:13.466251       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
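The "Received signal to terminate, becoming unready, but keeping serving" line reflects the usual graceful-termination pattern: flip readiness on SIGTERM so load balancers drain traffic, keep serving in-flight requests, then shut down. A minimal sketch of that pattern (illustrative only; not the kube-apiserver's actual shutdown code; assumes Go 1.19+ for atomic.Bool):

package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

func main() {
	var terminating atomic.Bool

	mux := http.NewServeMux()
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if terminating.Load() {
			// Report unready so load balancers stop sending new traffic.
			http.Error(w, "shutting down", http.StatusInternalServerError)
			return
		}
		w.Write([]byte("ok"))
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("still serving"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	go func() {
		<-ctx.Done()
		// Become unready first, keep serving for a drain period,
		// then shut the listener down.
		terminating.Store(true)
		time.Sleep(30 * time.Second)
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		srv.Shutdown(shutdownCtx)
	}()

	if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
		log.Fatal(err)
	}
}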
Apr 20 17:56:36.741 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): ationController - "LocalhostRecoveryServing"\nI0420 17:47:17.749161       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "KubeControllerManagerClient"\nI0420 17:47:17.749168       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "KubeControllerManagerClient"\nI0420 17:47:17.749177       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "KubeControllerManagerClient"\nI0420 17:54:13.561647       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0420 17:54:13.561779       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0420 17:54:13.561992       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0420 17:54:13.562010       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0420 17:54:13.562022       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0420 17:54:13.562033       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0420 17:54:13.562045       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0420 17:54:13.562056       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0420 17:54:13.562065       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0420 17:54:13.562100       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0420 17:54:13.562111       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0420 17:54:13.562119       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0420 17:54:13.562126       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Apr 20 17:56:36.741 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0420 17:32:19.132245       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 20 17:56:36.741 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:54:09.077185       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:09.077608       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0420 17:54:11.467070       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0420 17:54:11.467455       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 20 17:56:36.771 E ns/openshift-etcd pod/etcd-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-20 17:29:34.715730 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-142-75.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-142-75.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-20 17:29:34.716514 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-20 17:29:34.716849 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-75.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-75.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/20 17:29:34 grpc: addrConn.createTransport failed to connect to {https://10.0.142.75:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.142.75:9978: connect: connection refused". Reconnecting...\n2020-04-20 17:29:34.719958 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/20 17:29:35 grpc: addrConn.createTransport failed to connect to {https://10.0.142.75:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.142.75:9978: connect: connection refused". Reconnecting...\n
Apr 20 17:56:40.379 E ns/openshift-multus pod/multus-pg66q node/ip-10-0-142-75.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 20 17:56:44.523 E ns/openshift-machine-config-operator pod/machine-config-daemon-7c2rn node/ip-10-0-142-75.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 20 17:56:47.607 E ns/openshift-multus pod/multus-pg66q node/ip-10-0-142-75.ec2.internal invariant violation: pod may not transition Running->Pending
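The two invariant violations above flag a pod whose phase appeared to move backwards from Running to Pending. A minimal sketch of such a transition check (illustrative; the function and ordering are hypothetical, not the openshift-tests monitor's implementation):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// phaseOrder is the forward-only ordering the check assumes; terminal
// phases are collapsed for brevity and PodUnknown is omitted.
var phaseOrder = map[corev1.PodPhase]int{
	corev1.PodPending:   0,
	corev1.PodRunning:   1,
	corev1.PodSucceeded: 2,
	corev1.PodFailed:    2,
}

// checkTransition returns a non-empty message for transitions that move
// a pod backwards, such as the Running -> Pending reported above.
func checkTransition(oldPhase, newPhase corev1.PodPhase) string {
	if phaseOrder[newPhase] < phaseOrder[oldPhase] {
		return fmt.Sprintf("invariant violation: pod may not transition %s->%s", oldPhase, newPhase)
	}
	return ""
}

func main() {
	fmt.Println(checkTransition(corev1.PodRunning, corev1.PodPending))
}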
Apr 20 17:56:48.565 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-231.ec2.internal node/ip-10-0-154-231.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): 6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=39160&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:43.973160       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=39190&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:43.973320       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=42161&timeout=7m45s&timeoutSeconds=465&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:43.973419       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=44811&timeout=8m59s&timeoutSeconds=539&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:43.974243       1 reflector.go:307] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: Get https://localhost:6443/apis/image.openshift.io/v1/imagestreams?allowWatchBookmarks=true&resourceVersion=44670&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0420 17:56:43.975230       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=43888&timeout=6m15s&timeoutSeconds=375&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0420 17:56:48.225136       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0420 17:56:48.225201       1 policy_controller.go:94] leaderelection lost\n
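The cluster-policy-controller exit above follows the standard client-go leader-election pattern: when the lease cannot be renewed (here the local apiserver at [::1]:6443 refused connections during the upgrade), OnStoppedLeading fires and the process exits fatally, producing the "leaderelection lost" line and exit code 255 (klog's Fatal exit code). A minimal sketch assuming a Lease lock (illustrative; the 4.4 controller may use a different lock type and timings):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "openshift-kube-controller-manager", Name: "cluster-policy-controller"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 60 * time.Second,
		RenewDeadline: 15 * time.Second,
		RetryPeriod:   5 * time.Second,
		Callbacks: leaderelection.LeaderElectionCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the controller loops until the context is cancelled.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Renewal failed: give up leadership and exit non-zero,
				// matching "F... leaderelection lost" and exit code 255.
				klog.Fatalf("leaderelection lost")
			},
		},
	})
}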
Apr 20 17:56:50.272 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
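The Degraded condition reported above can be inspected directly on the ClusterOperator resource. A minimal sketch using the dynamic client (illustrative; assumes client-go 0.18+ for context-aware calls and a kubeconfig at the default location):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	co, err := client.Resource(gvr).Get(context.TODO(), "openshift-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	conditions, _, err := unstructured.NestedSlice(co.Object, "status", "conditions")
	if err != nil {
		panic(err)
	}
	for _, c := range conditions {
		cond, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		// Prints e.g. Degraded=True: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
		fmt.Printf("%v=%v: %v\n", cond["type"], cond["status"], cond["message"])
	}
}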
Apr 20 17:57:44.955 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:58:05.042 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 17:58:40.168 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-75.ec2.internal node/ip-10-0-142-75.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 20 18:00:18.169 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-162.ec2.internal node/ip-10-0-140-162.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n