Result: SUCCESS
Tests: 4 failed / 21 succeeded
Started: 2020-04-18 10:10
Elapsed: 1h14m
Work namespace: ci-op-mmn95xcw
Refs: release-4.4:d0260133, 127:c9dd11dc
pod: cbc32572-815c-11ea-9d1e-0a58ac10e230
repo: openshift/cluster-node-tuning-operator
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (34m56s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 3s of 30m35s (0%):

Apr 18 10:54:48.170 E ns/e2e-k8s-service-lb-available-2466 svc/service-test Service stopped responding to GET requests on reused connections
Apr 18 10:54:49.169 - 1s    E ns/e2e-k8s-service-lb-available-2466 svc/service-test Service is not responding to GET requests on reused connections
Apr 18 10:54:50.203 I ns/e2e-k8s-service-lb-available-2466 svc/service-test Service started responding to GET requests on reused connections
Apr 18 10:56:41.170 E ns/e2e-k8s-service-lb-available-2466 svc/service-test Service stopped responding to GET requests on reused connections
Apr 18 10:56:41.202 I ns/e2e-k8s-service-lb-available-2466 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1587208598.xml


Cluster upgrade Cluster frontend ingress remain available (34m26s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 2s of 34m26s (0%):

Apr 18 10:56:22.294 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Apr 18 10:56:23.294 E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Apr 18 10:56:23.346 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
				from junit_upgrade_1587208598.xml


Cluster upgrade OpenShift APIs remain available (34m26s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 34m25s (0%):

Apr 18 11:04:19.275 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Apr 18 11:04:20.054 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 18 11:04:20.160 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1587208598.xml


openshift-tests Monitor cluster while tests execute (34m58s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
208 error level events were detected during this test run:

Apr 18 10:41:49.558 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:41:49.263190       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:41:49.265138       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:41:49.267030       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:41:49.267160       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 10:41:49.267640       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:42:14.759 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:42:54.975 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-controller-manager container exited with code 255 (Error): eflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/servicecatalogcontrollermanagers?allowWatchBookmarks=true&resourceVersion=8327&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:42:53.898915       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=19488&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:42:53.900854       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machine.openshift.io/v1beta1/machinehealthchecks?allowWatchBookmarks=true&resourceVersion=8440&timeout=9m52s&timeoutSeconds=592&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0418 10:42:53.901416       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0418 10:42:53.901525       1 controllermanager.go:291] leaderelection lost\nI0418 10:42:53.915883       1 certificate_controller.go:130] Shutting down certificate controller "csrsigning"\nI0418 10:42:53.915886       1 garbagecollector.go:147] Shutting down garbage collector controller\nI0418 10:42:53.915873       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-134-195_10238387-8401-4aeb-9d1b-0dd5bf50b288 stopped leading\nI0418 10:42:53.915901       1 resource_quota_controller.go:290] Shutting down resource quota controller\nI0418 10:42:53.959208       1 graph_builder.go:310] stopped 133 of 133 monitors\nI0418 10:42:53.959227       1 graph_builder.go:311] GraphBuilder stopping\n
Apr 18 10:45:39.852 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-5bd758599f" has successfully progressed.
Apr 18 10:46:33.985 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-544f79fbd6-l576t node/ip-10-0-134-195.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): me":"2020-04-18T10:32:01Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-18T10:27:24Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0418 10:40:35.155206       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"eba85aac-c02a-4ee5-8348-04dc7721cc78", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-134-195.ec2.internal pods/openshift-kube-scheduler-ip-10-0-134-195.ec2.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0418 10:40:35.902242       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"eba85aac-c02a-4ee5-8348-04dc7721cc78", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-7 -n openshift-kube-scheduler:\ncause by changes in data.status\nI0418 10:40:38.312940       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"eba85aac-c02a-4ee5-8348-04dc7721cc78", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-7-ip-10-0-134-195.ec2.internal -n openshift-kube-scheduler because it was missing\nI0418 10:46:33.101010       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 10:46:33.101435       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0418 10:46:33.101463       1 builder.go:209] server exited\n
Apr 18 10:47:11.319 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:47:20.392 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:47:19.611280       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:47:19.614417       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:47:19.618284       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0418 10:47:19.619353       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:47:33.557 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:47:39.617 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:47:39.505961       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:47:39.507703       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:47:39.509353       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:47:39.509407       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 10:47:39.510264       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:48:06.867 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:48:54.484 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:48:53.958829       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:48:53.961834       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:48:53.961898       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0418 10:48:53.962447       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:48:57.089 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): 769&timeout=8m26s&timeoutSeconds=506&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:55.694586       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Infrastructure: Get https://localhost:6443/apis/config.openshift.io/v1/infrastructures?allowWatchBookmarks=true&resourceVersion=18303&timeout=9m20s&timeoutSeconds=560&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:55.696045       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?allowWatchBookmarks=true&resourceVersion=22769&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:55.696969       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=22769&timeout=7m40s&timeoutSeconds=460&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:55.698085       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=22956&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:55.699210       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=22956&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0418 10:48:56.221591       1 leaderelection.go:288] failed to renew lease openshift-kube-apiserver/cert-regeneration-controller-lock: timed out waiting for the condition\nF0418 10:48:56.221681       1 leaderelection.go:67] leaderelection lost\nI0418 10:48:56.224957       1 certrotationcontroller.go:556] Shutting down CertRotation\n
Apr 18 10:48:58.124 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=20275&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:56.760090       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=18114&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:56.761277       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=23806&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:56.762424       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=23750&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:56.763625       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://localhost:6443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=20334&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:48:56.764798       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=20386&timeout=5m54s&timeoutSeconds=354&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0418 10:48:57.656644       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0418 10:48:57.656712       1 policy_controller.go:94] leaderelection lost\n
Apr 18 10:49:06.813 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-bb568b988-qdh4p node/ip-10-0-134-195.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): e":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-18T10:27:23Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-18-101143"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0418 10:36:24.604840       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0418 10:36:24.622135       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0418 10:49:06.227090       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 10:49:06.227157       1 leaderelection.go:66] leaderelection lost\n
Apr 18 10:49:08.536 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:49:08.310908       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:49:08.312955       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:49:08.315360       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:49:08.315487       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 10:49:08.316221       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:49:46.769 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:50:10.184 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:50:09.493953       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:50:09.499876       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:50:09.503374       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:50:09.503789       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 10:50:09.505762       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:50:10.853 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:50:26.235 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 10:50:26.151699       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:50:26.153740       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:50:26.155434       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:50:26.155584       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 10:50:26.155990       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 10:50:32.942 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:51:31.256 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update clusterrolebinding "cluster-node-tuning-operator" (264 of 573)\n* Could not update clusterrolebinding "insights-operator" (379 of 573)\n* Could not update credentialsrequest "openshift-cloud-credential-operator/openshift-ingress" (238 of 573)\n* Could not update csisnapshotcontroller "cluster" (205 of 573)\n* Could not update openshiftcontrollermanager "cluster" (273 of 573)\n* Could not update prometheusrule "openshift-cluster-samples-operator/samples-operator-alerts" (282 of 573)\n* Could not update servicecatalogapiserver "cluster" (317 of 573)\n* Could not update servicecatalogcontrollermanager "cluster" (327 of 573)\n* deployment openshift-cloud-credential-operator/cloud-credential-operator is progressing NewReplicaSetAvailable: ReplicaSet "cloud-credential-operator-58bcd5b976" has successfully progressed.\n* deployment openshift-cluster-storage-operator/cluster-storage-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-storage-operator-5f576856f9" has successfully progressed.\n* deployment openshift-console/downloads is progressing NewReplicaSetAvailable: ReplicaSet "downloads-7c9f7bf8c4" has successfully progressed.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-image-registry-operator-5cf66f85d8" has successfully progressed.\n* deployment openshift-machine-api/cluster-autoscaler-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-autoscaler-operator-689c7f97df" has successfully progressed.\n* deployment openshift-monitoring/cluster-monitoring-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-monitoring-operator-568847fd49" has successfully progressed.\n* deployment openshift-operator-lifecycle-manager/olm-operator is progressing NewReplicaSetAvailable: ReplicaSet "olm-operator-8668994dbc" has successfully progressed.
Apr 18 10:51:49.334 E ns/openshift-monitoring pod/node-exporter-6kd6m node/ip-10-0-155-8.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 10:51:52.519 E ns/openshift-monitoring pod/kube-state-metrics-74449655b8-bbtbw node/ip-10-0-140-140.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 18 10:51:52.570 E ns/openshift-monitoring pod/openshift-state-metrics-b5ff6d486-2465q node/ip-10-0-140-140.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Apr 18 10:52:04.400 E ns/openshift-monitoring pod/prometheus-adapter-5bcbbdc5bb-l4kt2 node/ip-10-0-153-224.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0418 10:38:37.940643       1 adapter.go:93] successfully using in-cluster auth\nI0418 10:38:38.447536       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 18 10:52:09.598 E ns/openshift-monitoring pod/node-exporter-9m4s7 node/ip-10-0-140-140.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 10:52:14.799 E ns/openshift-monitoring pod/node-exporter-m8spf node/ip-10-0-132-22.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:42Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:57Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 10:52:15.763 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:52:19.507 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-153-224.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-18T10:52:14.008Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-18T10:52:14.017Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-18T10:52:14.017Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-18T10:52:14.018Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-18T10:52:14.019Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-18T10:52:14.019Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-18T10:52:14.019Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-18T10:52:14.019Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-18T10:52:14.019Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-18T10:52:14.019Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-18
Apr 18 10:52:22.674 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-667776b6b6-j7pcz node/ip-10-0-140-140.ec2.internal container=operator container exited with code 255 (Error): UTC m=+906.861289707\nI0418 10:51:30.807870       1 operator.go:147] Finished syncing operator at 46.25838ms\nI0418 10:51:30.813219       1 operator.go:145] Starting syncing operator at 2020-04-18 10:51:30.813214072 +0000 UTC m=+906.912898089\nI0418 10:51:31.068325       1 operator.go:147] Finished syncing operator at 255.101993ms\nI0418 10:51:36.258245       1 operator.go:145] Starting syncing operator at 2020-04-18 10:51:36.258235778 +0000 UTC m=+912.357919795\nI0418 10:51:36.297964       1 operator.go:147] Finished syncing operator at 39.719547ms\nI0418 10:51:36.309369       1 operator.go:145] Starting syncing operator at 2020-04-18 10:51:36.309360912 +0000 UTC m=+912.409045126\nI0418 10:51:36.377112       1 operator.go:147] Finished syncing operator at 67.743004ms\nI0418 10:52:15.043824       1 operator.go:145] Starting syncing operator at 2020-04-18 10:52:15.043813291 +0000 UTC m=+951.143497281\nI0418 10:52:15.074040       1 operator.go:147] Finished syncing operator at 30.2204ms\nI0418 10:52:15.074075       1 operator.go:145] Starting syncing operator at 2020-04-18 10:52:15.074070932 +0000 UTC m=+951.173754858\nI0418 10:52:15.106509       1 operator.go:147] Finished syncing operator at 32.431724ms\nI0418 10:52:15.106546       1 operator.go:145] Starting syncing operator at 2020-04-18 10:52:15.106542023 +0000 UTC m=+951.206225950\nI0418 10:52:15.143175       1 operator.go:147] Finished syncing operator at 36.627094ms\nI0418 10:52:15.143212       1 operator.go:145] Starting syncing operator at 2020-04-18 10:52:15.143207043 +0000 UTC m=+951.242890980\nI0418 10:52:15.455431       1 operator.go:147] Finished syncing operator at 312.211229ms\nI0418 10:52:21.535889       1 operator.go:145] Starting syncing operator at 2020-04-18 10:52:21.535878022 +0000 UTC m=+957.635562055\nI0418 10:52:21.560712       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 10:52:21.560784       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0418 10:52:21.560825       1 builder.go:210] server exited\n
Apr 18 10:52:23.104 E ns/openshift-monitoring pod/node-exporter-2sdlh node/ip-10-0-134-195.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 10:52:33.820 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-140.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-18T10:52:30.624Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-18T10:52:30.633Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-18T10:52:30.634Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-18T10:52:30.635Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-18T10:52:30.636Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-18T10:52:30.636Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-18T10:52:30.636Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-18T10:52:30.636Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-18T10:52:30.637Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-18T10:52:30.637Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-18
Apr 18 10:52:37.151 E ns/openshift-authentication-operator pod/authentication-operator-69d7ff8596-5k9dw node/ip-10-0-134-195.ec2.internal container=operator container exited with code 255 (Error): ype":"Progressing"},{"lastTransitionTime":"2020-04-18T10:41:35Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-18T10:27:22Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0418 10:52:20.317565       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"d14d7ec7-398f-4763-83b3-4f78bb06d922", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: deployment's observed generation did not reach the expected generation")\nI0418 10:52:25.725451       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-18T10:36:24Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-18T10:52:20Z","message":"Progressing: not all deployment replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-04-18T10:41:35Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-18T10:27:22Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0418 10:52:25.786591       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"d14d7ec7-398f-4763-83b3-4f78bb06d922", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0418 10:52:36.283578       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 10:52:36.283749       1 builder.go:210] server exited\n
Apr 18 10:52:40.210 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:52:41.970 E ns/openshift-console-operator pod/console-operator-586b7db98-vgwqb node/ip-10-0-132-22.ec2.internal container=console-operator container exited with code 255 (Error): ote error: tls: bad certificate\nI0418 10:51:03.436445       1 log.go:172] http: TLS handshake error from 10.131.0.21:54442: remote error: tls: bad certificate\nI0418 10:51:03.547713       1 log.go:172] http: TLS handshake error from 10.129.2.11:35674: remote error: tls: bad certificate\nI0418 10:51:33.436888       1 log.go:172] http: TLS handshake error from 10.131.0.21:55016: remote error: tls: bad certificate\nI0418 10:51:33.549447       1 log.go:172] http: TLS handshake error from 10.129.2.11:35882: remote error: tls: bad certificate\nI0418 10:52:03.442115       1 log.go:172] http: TLS handshake error from 10.131.0.21:55768: remote error: tls: bad certificate\nI0418 10:52:03.550256       1 log.go:172] http: TLS handshake error from 10.129.2.11:36160: remote error: tls: bad certificate\nI0418 10:52:33.548764       1 log.go:172] http: TLS handshake error from 10.129.2.23:43142: remote error: tls: bad certificate\nI0418 10:52:40.841573       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 10:52:40.842218       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0418 10:52:40.842221       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0418 10:52:40.842268       1 controller.go:70] Shutting down Console\nI0418 10:52:40.842280       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0418 10:52:40.842299       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0418 10:52:40.842318       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0418 10:52:40.842332       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0418 10:52:40.842346       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0418 10:52:40.842376       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0418 10:52:40.845688       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0418 10:52:40.842741       1 builder.go:243] stopped\n
Apr 18 10:52:42.149 E ns/openshift-monitoring pod/node-exporter-xvddt node/ip-10-0-138-82.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:51:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T10:52:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 10:52:42.213 E ns/openshift-service-ca pod/service-ca-d8f5bd97c-rxb8w node/ip-10-0-134-195.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Apr 18 10:52:42.990 E ns/openshift-controller-manager pod/controller-manager-rlrc6 node/ip-10-0-132-22.ec2.internal container=controller-manager container exited with code 137 (Error): I0418 10:33:40.863813       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0418 10:33:40.865985       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable-initial@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0418 10:33:40.866010       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable-initial@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0418 10:33:40.866112       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0418 10:33:40.867266       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 18 10:53:09.581 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 18 10:53:30.895 E ns/openshift-marketplace pod/community-operators-67d9c5fbbc-vx9g8 node/ip-10-0-140-140.ec2.internal container=community-operators container exited with code 2 (Error): 
Apr 18 10:53:38.924 E ns/openshift-marketplace pod/certified-operators-6894b9548d-xmj7r node/ip-10-0-140-140.ec2.internal container=certified-operators container exited with code 2 (Error): 
Apr 18 10:53:41.284 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-864c9f6446-dhkbp node/ip-10-0-138-82.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 18 10:53:56.767 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): -manager/secrets?allowWatchBookmarks=true&resourceVersion=23913&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.770216       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config/secrets?allowWatchBookmarks=true&resourceVersion=23913&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.771274       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=28872&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.772868       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=23913&timeout=7m24s&timeoutSeconds=444&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.773858       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=23913&timeout=5m1s&timeoutSeconds=301&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.778569       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=25232&timeoutSeconds=364&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0418 10:53:56.167984       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0418 10:53:56.168053       1 leaderelection.go:67] leaderelection lost\n
Apr 18 10:54:03.819 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): rs/factory.go:135: Failed to watch *v1beta1.Ingress: unknown (get ingresses.networking.k8s.io)\nE0418 10:53:58.663670       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: unknown (get cronjobs.batch)\nE0418 10:53:58.663735       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas)\nE0418 10:53:58.663792       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io)\nE0418 10:53:58.663853       1 reflector.go:307] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: unknown (get deploymentconfigs.apps.openshift.io)\nE0418 10:53:58.663911       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0418 10:53:58.663966       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: unknown (get routes.route.openshift.io)\nE0418 10:53:58.664023       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: unknown (get podtemplates)\nE0418 10:53:58.664082       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: unknown (get events.events.k8s.io)\nE0418 10:53:58.664142       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)\nE0418 10:53:58.664197       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nI0418 10:54:03.442065       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0418 10:54:03.442137       1 policy_controller.go:94] leaderelection lost\n
Apr 18 10:54:33.940 E ns/openshift-sdn pod/sdn-controller-lgmn2 node/ip-10-0-134-195.ec2.internal container=sdn-controller container exited with code 2 (Error): I0418 10:26:24.879994       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 18 10:54:38.212 E ns/openshift-sdn pod/sdn-qxnc5 node/ip-10-0-140-140.ec2.internal container=sdn container exited with code 255 (Error): 4.195:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0418 10:54:03.952971    2268 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:54:03.952993    2268 proxier.go:347] userspace syncProxyRules took 31.993265ms\nI0418 10:54:05.294485    2268 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.132.22:6443 10.0.134.195:6443 10.0.155.8:6443]\nI0418 10:54:05.450624    2268 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:54:05.450650    2268 proxier.go:347] userspace syncProxyRules took 27.622251ms\nI0418 10:54:30.042360    2268 pod.go:539] CNI_DEL openshift-ingress/router-default-7d9b5894b7-9sv9z\nI0418 10:54:31.534079    2268 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.19:6443 10.130.0.3:6443]\nI0418 10:54:31.534125    2268 roundrobin.go:217] Delete endpoint 10.129.0.6:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0418 10:54:31.534146    2268 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.19:8443 10.130.0.3:8443]\nI0418 10:54:31.534158    2268 roundrobin.go:217] Delete endpoint 10.129.0.6:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0418 10:54:31.687663    2268 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:54:31.687688    2268 proxier.go:347] userspace syncProxyRules took 27.678752ms\nI0418 10:54:36.918386    2268 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0418 10:54:37.958504    2268 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0418 10:54:37.958550    2268 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 18 10:54:43.561 E ns/openshift-sdn pod/sdn-controller-96d5t node/ip-10-0-132-22.ec2.internal container=sdn-controller container exited with code 2 (Error): 23")\nI0418 10:35:03.852660       1 subnets.go:149] Created HostSubnet ip-10-0-138-82.ec2.internal (host: "ip-10-0-138-82.ec2.internal", ip: "10.0.138.82", subnet: "10.128.2.0/23")\nI0418 10:35:05.576756       1 subnets.go:149] Created HostSubnet ip-10-0-153-224.ec2.internal (host: "ip-10-0-153-224.ec2.internal", ip: "10.0.153.224", subnet: "10.129.2.0/23")\nI0418 10:41:41.828389       1 vnids.go:115] Allocated netid 10869620 for namespace "e2e-kubernetes-api-available-9735"\nI0418 10:41:41.842291       1 vnids.go:115] Allocated netid 11915633 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-7370"\nI0418 10:41:41.852790       1 vnids.go:115] Allocated netid 4316239 for namespace "e2e-k8s-sig-apps-job-upgrade-9739"\nI0418 10:41:41.863595       1 vnids.go:115] Allocated netid 6185928 for namespace "e2e-frontend-ingress-available-2627"\nI0418 10:41:41.874181       1 vnids.go:115] Allocated netid 294465 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-8294"\nI0418 10:41:41.882975       1 vnids.go:115] Allocated netid 7218586 for namespace "e2e-k8s-service-lb-available-2466"\nI0418 10:41:41.892997       1 vnids.go:115] Allocated netid 15457843 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-8519"\nI0418 10:41:41.905421       1 vnids.go:115] Allocated netid 9303007 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-2579"\nI0418 10:41:41.917881       1 vnids.go:115] Allocated netid 12769735 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8930"\nI0418 10:41:41.927265       1 vnids.go:115] Allocated netid 9482897 for namespace "e2e-openshift-api-available-8164"\nE0418 10:53:20.718890       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=23814&timeout=7m25s&timeoutSeconds=445&watch=true: dial tcp 10.0.145.128:6443: connect: connection refused\n
Apr 18 10:55:02.467 E ns/openshift-multus pod/multus-64x88 node/ip-10-0-138-82.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 18 10:55:02.664 E ns/openshift-multus pod/multus-admission-controller-28lg5 node/ip-10-0-132-22.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 18 10:55:32.557 E ns/openshift-sdn pod/sdn-vn4h8 node/ip-10-0-138-82.ec2.internal container=sdn container exited with code 255 (Error): openshift-insights/metrics:https" at 172.30.223.159:443/TCP\nI0418 10:55:20.294054   44806 service.go:363] Adding new service port "openshift-marketplace/community-operators:grpc" at 172.30.38.237:50051/TCP\nI0418 10:55:20.294071   44806 service.go:363] Adding new service port "openshift-ingress/router-default:http" at 172.30.206.188:80/TCP\nI0418 10:55:20.294086   44806 service.go:363] Adding new service port "openshift-ingress/router-default:https" at 172.30.206.188:443/TCP\nI0418 10:55:20.294412   44806 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0418 10:55:20.390203   44806 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:20.390233   44806 proxier.go:347] userspace syncProxyRules took 97.393445ms\nI0418 10:55:20.403355   44806 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:20.403377   44806 proxier.go:347] userspace syncProxyRules took 108.943645ms\nI0418 10:55:20.452495   44806 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31435/tcp)\nI0418 10:55:20.452595   44806 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:30649/tcp)\nI0418 10:55:20.452684   44806 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-2466/service-test:" (:31228/tcp)\nI0418 10:55:20.487360   44806 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 31496\nI0418 10:55:20.608922   44806 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0418 10:55:20.608954   44806 cmd.go:173] openshift-sdn network plugin registering startup\nI0418 10:55:20.609076   44806 cmd.go:177] openshift-sdn network plugin ready\nI0418 10:55:32.096525   44806 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0418 10:55:32.096566   44806 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 18 10:55:42.642 E ns/openshift-multus pod/multus-admission-controller-q8qlw node/ip-10-0-155-8.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 18 10:55:48.675 E ns/openshift-multus pod/multus-l29n9 node/ip-10-0-155-8.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 18 10:56:05.135 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7996dd769c-ch4xc node/ip-10-0-153-224.ec2.internal container=snapshot-controller container exited with code 255 (Error): 
Apr 18 10:56:12.924 E ns/openshift-sdn pod/sdn-8tnsn node/ip-10-0-155-8.ec2.internal container=sdn container exited with code 255 (Error): in registering startup\nI0418 10:55:31.010832   84872 cmd.go:177] openshift-sdn network plugin ready\nI0418 10:55:42.172673   84872 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-q8qlw\nI0418 10:55:48.848754   84872 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-66dvk got IP 10.130.0.67, ofport 68\nI0418 10:55:51.699524   84872 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.19:6443 10.129.0.73:6443 10.130.0.67:6443]\nI0418 10:55:51.699572   84872 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.19:8443 10.129.0.73:8443 10.130.0.67:8443]\nI0418 10:55:51.717787   84872 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.67:6443]\nI0418 10:55:51.717819   84872 roundrobin.go:217] Delete endpoint 10.128.0.19:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0418 10:55:51.717837   84872 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.67:8443]\nI0418 10:55:51.717850   84872 roundrobin.go:217] Delete endpoint 10.128.0.19:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0418 10:55:51.872965   84872 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:51.872985   84872 proxier.go:347] userspace syncProxyRules took 28.250414ms\nI0418 10:55:52.018633   84872 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:52.018727   84872 proxier.go:347] userspace syncProxyRules took 31.959589ms\nI0418 10:56:12.038963   84872 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0418 10:56:12.039005   84872 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 18 10:56:32.036 E ns/openshift-sdn pod/sdn-dcsj7 node/ip-10-0-132-22.ec2.internal container=sdn container exited with code 255 (Error): 382 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:42.240662   88382 proxier.go:347] userspace syncProxyRules took 36.158005ms\nI0418 10:55:51.699527   88382 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.19:6443 10.129.0.73:6443 10.130.0.67:6443]\nI0418 10:55:51.699657   88382 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.19:8443 10.129.0.73:8443 10.130.0.67:8443]\nI0418 10:55:51.717811   88382 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.67:8443]\nI0418 10:55:51.717916   88382 roundrobin.go:217] Delete endpoint 10.128.0.19:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0418 10:55:51.717980   88382 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.67:6443]\nI0418 10:55:51.718029   88382 roundrobin.go:217] Delete endpoint 10.128.0.19:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0418 10:55:51.877117   88382 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:51.877158   88382 proxier.go:347] userspace syncProxyRules took 30.978722ms\nI0418 10:55:52.034236   88382 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:55:52.034268   88382 proxier.go:347] userspace syncProxyRules took 40.089421ms\nI0418 10:56:22.209760   88382 proxier.go:368] userspace proxy: processing 0 service events\nI0418 10:56:22.209808   88382 proxier.go:347] userspace syncProxyRules took 32.232392ms\nI0418 10:56:31.132764   88382 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0418 10:56:31.132807   88382 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 18 10:56:40.508 E ns/openshift-multus pod/multus-s59tb node/ip-10-0-140-140.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 18 10:57:26.265 E ns/openshift-multus pod/multus-g5rzv node/ip-10-0-132-22.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 18 10:58:08.423 E ns/openshift-multus pod/multus-fmjd7 node/ip-10-0-153-224.ec2.internal container=kube-multus container exited with code 137 (Error): 
Apr 18 10:59:17.941 E ns/openshift-machine-config-operator pod/machine-config-operator-7d7bf746ff-qv5cv node/ip-10-0-134-195.ec2.internal container=machine-config-operator container exited with code 2 (Error): ", Name:"machine-config", UID:"e1e8d4f5-4fa7-4784-a454-81d42b568761", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-04-18-101143}]\nE0418 10:27:17.783958       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0418 10:27:17.809283       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0418 10:27:18.856330       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0418 10:27:23.093390       1 sync.go:61] [init mode] synced RenderConfig in 5.62951361s\nI0418 10:27:23.525628       1 sync.go:61] [init mode] synced MachineConfigPools in 431.956078ms\nI0418 10:27:39.637840       1 sync.go:61] [init mode] synced MachineConfigDaemon in 16.112141531s\nI0418 10:27:43.688277       1 sync.go:61] [init mode] synced MachineConfigController in 4.050397136s\nI0418 10:27:45.772850       1 sync.go:61] [init mode] synced MachineConfigServer in 2.084522588s\nI0418 10:30:56.781180       1 sync.go:61] [init mode] synced RequiredPools in 3m11.008284703s\nI0418 10:30:56.816959       1 sync.go:85] Initialization complete\nE0418 10:32:59.628254       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 18 11:01:14.194 E ns/openshift-machine-config-operator pod/machine-config-daemon-6q488 node/ip-10-0-132-22.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:01:22.147 E ns/openshift-machine-config-operator pod/machine-config-daemon-4jdf2 node/ip-10-0-140-140.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:01:26.870 E ns/openshift-machine-config-operator pod/machine-config-daemon-zd2b5 node/ip-10-0-153-224.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:01:45.193 E ns/openshift-machine-config-operator pod/machine-config-daemon-xn4bn node/ip-10-0-155-8.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:01:54.528 E ns/openshift-machine-config-operator pod/machine-config-controller-74978d5c69-cb64j node/ip-10-0-134-195.ec2.internal container=machine-config-controller container exited with code 2 (Error): ode_controller.go:452] Pool worker: node ip-10-0-138-82.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0418 10:36:26.773969       1 node_controller.go:452] Pool worker: node ip-10-0-153-224.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-8a1415b6f776fb191d5530b073ad6144\nI0418 10:36:26.774084       1 node_controller.go:452] Pool worker: node ip-10-0-153-224.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-8a1415b6f776fb191d5530b073ad6144\nI0418 10:36:26.774096       1 node_controller.go:452] Pool worker: node ip-10-0-153-224.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0418 10:36:30.196014       1 node_controller.go:452] Pool worker: node ip-10-0-140-140.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-8a1415b6f776fb191d5530b073ad6144\nI0418 10:36:30.196133       1 node_controller.go:452] Pool worker: node ip-10-0-140-140.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-8a1415b6f776fb191d5530b073ad6144\nI0418 10:36:30.196177       1 node_controller.go:452] Pool worker: node ip-10-0-140-140.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0418 10:40:32.856158       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0418 10:40:32.926724       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0418 10:48:18.851901       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0418 10:48:18.911992       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0418 10:53:22.110724       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0418 10:53:22.175589       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Apr 18 11:03:31.563 E ns/openshift-machine-config-operator pod/machine-config-server-pg7wh node/ip-10-0-155-8.ec2.internal container=machine-config-server container exited with code 2 (Error): I0418 10:28:29.772020       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0418 10:28:29.773338       1 api.go:51] Launching server on :22624\nI0418 10:28:29.775391       1 api.go:51] Launching server on :22623\n
Apr 18 11:03:40.624 E ns/openshift-kube-storage-version-migrator pod/migrator-56df98bfb4-hq5l2 node/ip-10-0-138-82.ec2.internal container=migrator container exited with code 2 (Error): I0418 10:53:20.647910       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 18 11:03:41.741 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-588ddmrfl node/ip-10-0-155-8.ec2.internal container=operator container exited with code 255 (Error): 00 [Prometheus/2.15.2 10.131.0.30:43610]\nI0418 11:02:18.822109       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.822931       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.824884       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.824982       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.825915       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.828293       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.832952       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.844144       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:18.844863       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0418 11:02:28.199116       1 httplog.go:90] GET /metrics: (6.461643ms) 200 [Prometheus/2.15.2 10.129.2.23:50100]\nI0418 11:02:44.011861       1 httplog.go:90] GET /metrics: (5.861758ms) 200 [Prometheus/2.15.2 10.131.0.30:43610]\nI0418 11:02:58.198589       1 httplog.go:90] GET /metrics: (6.121541ms) 200 [Prometheus/2.15.2 10.129.2.23:50100]\nI0418 11:03:14.011800       1 httplog.go:90] GET /metrics: (5.829936ms) 200 [Prometheus/2.15.2 10.131.0.30:43610]\nI0418 11:03:28.197970       1 httplog.go:90] GET /metrics: (5.498319ms) 200 [Prometheus/2.15.2 10.129.2.23:50100]\nI0418 11:03:40.684747       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 11:03:40.685647       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0418 11:03:40.685805       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0418 11:03:40.685831       1 builder.go:243] stopped\n
Apr 18 11:03:43.828 E ns/openshift-service-ca-operator pod/service-ca-operator-575d844c7d-nfcbf node/ip-10-0-155-8.ec2.internal container=operator container exited with code 255 (Error): 
Apr 18 11:03:47.009 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-547df5f78d-tlnrx node/ip-10-0-155-8.ec2.internal container=kube-apiserver-operator container exited with code 255 (Error): oller.go:74] Shutting down LoggingSyncer ...\nI0418 11:03:45.822940       1 controller.go:331] Shutting down BoundSATokenSignerController\nI0418 11:03:45.822982       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0418 11:03:45.823022       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0418 11:03:45.823130       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0418 11:03:45.823291       1 base_controller.go:49] Shutting down worker of  controller ...\nI0418 11:03:45.823342       1 base_controller.go:39] All  workers have been terminated\nI0418 11:03:45.823442       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0418 11:03:45.823486       1 base_controller.go:39] All PruneController workers have been terminated\nI0418 11:03:45.823534       1 base_controller.go:49] Shutting down worker of  controller ...\nI0418 11:03:45.823572       1 base_controller.go:39] All  workers have been terminated\nI0418 11:03:45.823615       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0418 11:03:45.823649       1 base_controller.go:39] All NodeController workers have been terminated\nI0418 11:03:45.823693       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0418 11:03:45.823764       1 base_controller.go:39] All RevisionController workers have been terminated\nI0418 11:03:45.823819       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0418 11:03:45.823860       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0418 11:03:45.823907       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0418 11:03:45.823946       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0418 11:03:45.824012       1 targetconfigcontroller.go:440] Shutting down TargetConfigController\nF0418 11:03:45.825271       1 builder.go:243] stopped\n
Apr 18 11:03:47.028 E ns/openshift-authentication-operator pod/authentication-operator-857f858c58-fgcbl node/ip-10-0-155-8.ec2.internal container=operator container exited with code 255 (Error): f78bb06d922", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.130.0.54:46718->172.30.0.10:53: i/o timeout"\nI0418 10:56:22.410877       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-04-18T10:36:24Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-18T10:53:06Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-18T10:41:35Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-18T10:27:22Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0418 10:56:22.420225       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"d14d7ec7-398f-4763-83b3-4f78bb06d922", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.130.0.54:46718->172.30.0.10:53: i/o timeout" to ""\nI0418 11:03:45.403827       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 11:03:45.409084       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0418 11:03:45.412221       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0418 11:03:45.416307       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nF0418 11:03:45.416958       1 builder.go:243] stopped\n
Apr 18 11:03:48.746 E ns/openshift-console-operator pod/console-operator-7d68fd6bb7-ccfq5 node/ip-10-0-155-8.ec2.internal container=console-operator container exited with code 255 (Error): ":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-04-18T10:53:25Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-04-18T10:53:25Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-04-18T10:33:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0418 10:53:25.581585       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"a57a8d8e-5131-4f36-844d-44ef5e503528", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Progressing changed from True to False (""),Available changed from False to True ("")\nW0418 11:03:44.768683       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 877; INTERNAL_ERROR") has prevented the request from succeeding\nI0418 11:03:47.057556       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 11:03:47.059743       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0418 11:03:47.060217       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0418 11:03:47.060275       1 controller.go:70] Shutting down Console\nI0418 11:03:47.060323       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0418 11:03:47.060374       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0418 11:03:47.060428       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0418 11:03:47.060473       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0418 11:03:47.060539       1 management_state_controller.go:112] Shutting down management-state-controller-console\nF0418 11:03:47.060587       1 builder.go:210] server exited\n
Apr 18 11:03:49.166 E ns/openshift-cluster-machine-approver pod/machine-approver-79b9c455c4-sctkh node/ip-10-0-155-8.ec2.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0418 10:51:32.122391       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0418 10:51:32.122415       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0418 10:51:32.122465       1 main.go:236] Starting Machine Approver\nI0418 10:51:32.226149       1 main.go:146] CSR csr-j5vvm added\nI0418 10:51:32.226317       1 main.go:149] CSR csr-j5vvm is already approved\nI0418 10:51:32.226377       1 main.go:146] CSR csr-kmqsf added\nI0418 10:51:32.226415       1 main.go:149] CSR csr-kmqsf is already approved\nI0418 10:51:32.226450       1 main.go:146] CSR csr-ql5mw added\nI0418 10:51:32.226514       1 main.go:149] CSR csr-ql5mw is already approved\nI0418 10:51:32.226551       1 main.go:146] CSR csr-t6qff added\nI0418 10:51:32.226584       1 main.go:149] CSR csr-t6qff is already approved\nI0418 10:51:32.226657       1 main.go:146] CSR csr-whbnx added\nI0418 10:51:32.226697       1 main.go:149] CSR csr-whbnx is already approved\nI0418 10:51:32.226733       1 main.go:146] CSR csr-2sgdt added\nI0418 10:51:32.226767       1 main.go:149] CSR csr-2sgdt is already approved\nI0418 10:51:32.226827       1 main.go:146] CSR csr-h9xvt added\nI0418 10:51:32.226866       1 main.go:149] CSR csr-h9xvt is already approved\nI0418 10:51:32.226909       1 main.go:146] CSR csr-jdmjz added\nI0418 10:51:32.226944       1 main.go:149] CSR csr-jdmjz is already approved\nI0418 10:51:32.227006       1 main.go:146] CSR csr-lhc8h added\nI0418 10:51:32.227042       1 main.go:149] CSR csr-lhc8h is already approved\nI0418 10:51:32.227087       1 main.go:146] CSR csr-mxplj added\nI0418 10:51:32.227154       1 main.go:149] CSR csr-mxplj is already approved\nI0418 10:51:32.227196       1 main.go:146] CSR csr-4clln added\nI0418 10:51:32.227230       1 main.go:149] CSR csr-4clln is already approved\nI0418 10:51:32.227265       1 main.go:146] CSR csr-h85gx added\nI0418 10:51:32.227330       1 main.go:149] CSR csr-h85gx is already approved\n
Apr 18 11:04:23.067 E ns/openshift-authentication pod/oauth-openshift-64748d564b-9jq9f node/ip-10-0-132-22.ec2.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0418 11:04:22.440054       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0418 11:04:22.440581       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0418 11:04:22.443005       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 18 11:06:17.633 E ns/openshift-cluster-node-tuning-operator pod/tuned-2n9l7 node/ip-10-0-138-82.ec2.internal container=tuned container exited with code 143 (Error):   tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-18 10:52:57,357 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-18 10:52:57,357 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-18 10:52:57,358 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-18 10:52:57,359 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-18 10:52:57,393 INFO     tuned.daemon.controller: starting controller\n2020-04-18 10:52:57,393 INFO     tuned.daemon.daemon: starting tuning\n2020-04-18 10:52:57,404 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-18 10:52:57,405 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-18 10:52:57,408 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-18 10:52:57,410 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:52:57,412 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:52:57,520 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:52:57,526 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0418 11:04:19.270962   39472 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0418 11:04:19.271375   39472 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0418 11:04:19.281750   39472 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:601: Failed to watch *v1.Profile: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/profiles?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dip-10-0-138-82.ec2.internal&resourceVersion=28604&timeoutSeconds=506&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 18 11:06:17.651 E ns/openshift-monitoring pod/node-exporter-9pkvf node/ip-10-0-138-82.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:06:17.666 E ns/openshift-multus pod/multus-65gdh node/ip-10-0-138-82.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:06:17.730 E ns/openshift-machine-config-operator pod/machine-config-daemon-2x462 node/ip-10-0-138-82.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:06:20.573 E ns/openshift-multus pod/multus-65gdh node/ip-10-0-138-82.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:06:26.293 E ns/openshift-machine-config-operator pod/machine-config-daemon-2x462 node/ip-10-0-138-82.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 18 11:06:27.336 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 18 11:06:35.690 E ns/openshift-monitoring pod/node-exporter-7jp6k node/ip-10-0-155-8.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:03:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:04:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:06:35.716 E ns/openshift-multus pod/multus-r6n2r node/ip-10-0-155-8.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:06:35.743 E ns/openshift-machine-config-operator pod/machine-config-daemon-629p2 node/ip-10-0-155-8.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:06:35.756 E ns/openshift-machine-config-operator pod/machine-config-server-b8mdm node/ip-10-0-155-8.ec2.internal container=machine-config-server container exited with code 2 (Error): I0418 11:03:33.623905       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0418 11:03:33.625106       1 api.go:51] Launching server on :22624\nI0418 11:03:33.625187       1 api.go:51] Launching server on :22623\n
Apr 18 11:06:35.768 E ns/openshift-multus pod/multus-admission-controller-66dvk node/ip-10-0-155-8.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Apr 18 11:06:35.785 E ns/openshift-cluster-version pod/cluster-version-operator-688496b547-fp9fk node/ip-10-0-155-8.ec2.internal container=cluster-version-operator container exited with code 255 (Error): worker.go:621] Running sync for deployment "openshift-monitoring/cluster-monitoring-operator" (340 of 573)\nI0418 11:04:18.746732       1 request.go:538] Throttling request took 791.425941ms, request: GET:https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-node-tuning-operator\nI0418 11:04:18.796793       1 request.go:538] Throttling request took 794.780172ms, request: GET:https://127.0.0.1:6443/api/v1/namespaces/openshift-controller-manager-operator/services/metrics\nI0418 11:04:18.802666       1 sync_worker.go:634] Done syncing for service "openshift-controller-manager-operator/metrics" (275 of 573)\nI0418 11:04:18.802721       1 sync_worker.go:621] Running sync for configmap "openshift-controller-manager-operator/openshift-controller-manager-images" (276 of 573)\nI0418 11:04:18.847174       1 request.go:538] Throttling request took 796.243563ms, request: GET:https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/console-operator\nI0418 11:04:18.857460       1 sync_worker.go:634] Done syncing for clusterrole "console-operator" (347 of 573)\nI0418 11:04:18.857553       1 sync_worker.go:621] Running sync for role "openshift-console/console-operator" (348 of 573)\nI0418 11:04:18.896738       1 request.go:538] Throttling request took 793.84362ms, request: GET:https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/insights-operator\nI0418 11:04:18.976268       1 start.go:140] Shutting down due to terminated\nI0418 11:04:18.976396       1 start.go:188] Stepping down as leader\nI0418 11:04:18.976834       1 cvo.go:439] Started syncing cluster version "openshift-cluster-version/version" (2020-04-18 11:04:18.976824185 +0000 UTC m=+29.733435009)\nI0418 11:04:18.976976       1 cvo.go:468] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-mmn95xcw/release@sha256:58c9a1d3dc30ea110c0c443c829ecad921974e4a42dc0ee3672b1b056fda9a12", Force:true}\nF0418 11:04:19.030160       1 start.go:148] Received shutdown signal twice, exiting\n
Apr 18 11:06:35.816 E ns/openshift-cluster-node-tuning-operator pod/tuned-2c724 node/ip-10-0-155-8.ec2.internal container=tuned container exited with code 143 (Error): g recommended profile...\nI0418 10:53:33.187085   80233 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0418 10:53:33.315192   80233 tuned.go:258] recommended tuned profile openshift-control-plane content changed\nI0418 10:53:34.156676   80233 tuned.go:417] getting recommended profile...\nI0418 10:53:34.285165   80233 tuned.go:444] active profile () != recommended profile (openshift-control-plane)\nI0418 10:53:34.285202   80233 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0418 10:53:34.285244   80233 tuned.go:310] starting tuned...\n2020-04-18 10:53:34,420 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-18 10:53:34,433 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-18 10:53:34,434 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-18 10:53:34,434 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-18 10:53:34,436 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-18 10:53:34,480 INFO     tuned.daemon.controller: starting controller\n2020-04-18 10:53:34,480 INFO     tuned.daemon.daemon: starting tuning\n2020-04-18 10:53:34,492 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-18 10:53:34,493 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-18 10:53:34,497 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-18 10:53:34,498 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:53:34,500 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:53:34,656 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:53:34,677 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 18 11:06:35.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:03:59.166823       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:03:59.166852       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:01.178394       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:01.178426       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:03.219007       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:03.219037       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:05.270472       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:05.270524       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:07.283265       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:07.283296       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:09.294456       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:09.294491       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:11.311873       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:11.311909       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:13.322335       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:13.322433       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:15.342109       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:15.342143       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:04:17.362894       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:04:17.362918       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 18 11:06:35.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-scheduler container exited with code 2 (Error): llowWatchBookmarks=true&resourceVersion=23501&timeout=7m38s&timeoutSeconds=458&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:14.512700       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=24941&timeout=9m26s&timeoutSeconds=566&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:14.513733       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=19509&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:14.514918       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=20399&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:14.517784       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=19512&timeout=8m49s&timeoutSeconds=529&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:15.101326       1 leaderelection.go:331] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: Get https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps/kube-scheduler?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:20.498880       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0418 10:51:20.498977       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Apr 18 11:06:35.866 E ns/openshift-sdn pod/sdn-controller-lxghm node/ip-10-0-155-8.ec2.internal container=sdn-controller container exited with code 2 (Error): I0418 10:54:42.126387       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 18 11:06:35.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): ources", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots": unable to monitor quota for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=ippools": unable to monitor quota for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=ippools", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans"]\nI0418 10:54:29.524666       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0418 10:54:29.524679       1 policy_controller.go:147] Started Origin Controllers\nI0418 10:54:29.524701       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0418 10:54:29.524709       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0418 10:54:29.524745       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0418 10:54:29.630234       1 shared_informer.go:204] Caches are synced for resource quota \nW0418 11:03:44.767513       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 461; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 18 11:06:35.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-controller-manager container exited with code 2 (Error):   1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0418 10:48:53.576369       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0418 10:50:55.967110       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:02.006377       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:03.504123       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:03.504167       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0418 10:51:07.844250       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:12.126368       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:51:20.593366       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Apr 18 11:06:35.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 1258       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:03:47.491672       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:03:52.914821       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:03:52.915178       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:03:57.514751       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:03:57.515089       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:04:02.934434       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:02.934713       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:04:07.527402       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:07.527784       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:04:12.959703       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:12.960129       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:04:17.537651       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:17.538133       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 18 11:06:35.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0418 10:48:54.488830       1 cmd.go:200] Using insecure, self-signed certificates\nI0418 10:48:54.489300       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587206934 cert, and key in /tmp/serving-cert-192800217/serving-signer.crt, /tmp/serving-cert-192800217/serving-signer.key\nI0418 10:48:55.922991       1 observer_polling.go:155] Starting file observer\nI0418 10:48:55.950271       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nE0418 10:51:05.905697       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0418 11:04:18.948761       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:04:18.948803       1 leaderelection.go:67] leaderelection lost\n
Apr 18 11:06:35.909 E ns/openshift-controller-manager pod/controller-manager-2tx9q node/ip-10-0-155-8.ec2.internal container=controller-manager container exited with code 1 (Error): I0418 10:52:52.015206       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0418 10:52:52.016963       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0418 10:52:52.016987       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0418 10:52:52.017122       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0418 10:52:52.017175       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 18 11:06:40.850 E ns/openshift-etcd pod/etcd-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-18 10:47:54.227194 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-155-8.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-155-8.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-18 10:47:54.227777 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-18 10:47:54.228203 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-155-8.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-155-8.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/18 10:47:54 grpc: addrConn.createTransport failed to connect to {https://10.0.155.8:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.155.8:9978: connect: connection refused". Reconnecting...\n2020-04-18 10:47:54.230431 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/18 10:47:55 grpc: addrConn.createTransport failed to connect to {https://10.0.155.8:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.155.8:9978: connect: connection refused". Reconnecting...\n
Apr 18 11:06:42.915 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver container exited with code 1 (Error):  failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.194914       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.195851       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.196200       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.196331       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.196439       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.196520       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:04:19.196618       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\n
Apr 18 11:06:42.915 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0418 10:49:45.908316       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 18 11:06:42.915 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:03:59.763472       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:03:59.763819       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:04:09.790163       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:09.790491       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 18 11:06:42.915 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0418 10:49:45.684214       1 cmd.go:200] Using insecure, self-signed certificates\nI0418 10:49:45.684658       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587206985 cert, and key in /tmp/serving-cert-021579484/serving-signer.crt, /tmp/serving-cert-021579484/serving-signer.key\nI0418 10:49:46.152559       1 observer_polling.go:155] Starting file observer\nI0418 10:49:46.183650       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0418 10:50:56.106985       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0418 11:04:19.150718       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:04:19.150751       1 leaderelection.go:67] leaderelection lost\n
Apr 18 11:06:44.387 E ns/openshift-multus pod/multus-r6n2r node/ip-10-0-155-8.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:06:45.028 E ns/openshift-monitoring pod/prometheus-adapter-764977c47c-nvh7g node/ip-10-0-140-140.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0418 10:52:09.870829       1 adapter.go:93] successfully using in-cluster auth\nI0418 10:52:10.816196       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 18 11:06:45.063 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-140.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/04/18 10:52:10 Watching directory: "/etc/alertmanager/config"\n
Apr 18 11:06:45.063 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-140.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/18 10:52:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/18 10:52:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/18 10:52:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/18 10:52:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/18 10:52:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/18 10:52:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/18 10:52:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/18 10:52:10 http.go:107: HTTPS: listening on [::]:9095\nI0418 10:52:10.831341       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 18 11:06:45.117 E ns/openshift-kube-storage-version-migrator pod/migrator-56df98bfb4-8grc4 node/ip-10-0-140-140.ec2.internal container=migrator container exited with code 2 (Error): 
Apr 18 11:06:45.155 E ns/openshift-monitoring pod/thanos-querier-bfb5dfbb-lclr4 node/ip-10-0-140-140.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/04/18 10:52:04 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/18 10:52:04 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/18 10:52:04 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/18 10:52:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/18 10:52:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/18 10:52:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/18 10:52:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/18 10:52:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0418 10:52:04.694048       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/18 10:52:04 http.go:107: HTTPS: listening on [::]:9091\n
Apr 18 11:06:46.087 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-140-140.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/04/18 11:03:57 Watching directory: "/etc/alertmanager/config"\n
Apr 18 11:06:46.087 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-140-140.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/04/18 11:03:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/18 11:03:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/18 11:03:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/18 11:03:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/18 11:03:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/18 11:03:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/18 11:03:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0418 11:03:57.651002       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/18 11:03:57 http.go:107: HTTPS: listening on [::]:9095\n
Apr 18 11:06:46.129 E ns/openshift-monitoring pod/telemeter-client-68f969c56d-mn67n node/ip-10-0-140-140.ec2.internal container=reload container exited with code 2 (Error): 
Apr 18 11:06:46.129 E ns/openshift-monitoring pod/telemeter-client-68f969c56d-mn67n node/ip-10-0-140-140.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Apr 18 11:06:46.222 E ns/openshift-monitoring pod/kube-state-metrics-f86fd956c-ft8wx node/ip-10-0-140-140.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Apr 18 11:06:47.488 E ns/openshift-multus pod/multus-r6n2r node/ip-10-0-155-8.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:06:51.521 E ns/openshift-machine-config-operator pod/machine-config-daemon-629p2 node/ip-10-0-155-8.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 18 11:06:52.520 E ns/openshift-multus pod/multus-r6n2r node/ip-10-0-155-8.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:06:53.115 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: StaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver" is not ready\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver" is terminated: "Error" - " failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.194914       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.195851       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.196200       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.196331       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.196439       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.196520       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\nW0418 11:04:19.196618       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\". Reconnecting...\n"\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-cert-regeneration-controller" is not ready\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-cert-regeneration-controller" is terminated: "Error" - "W0418 10:49:45.684214       1 cmd.go:200] Using insecure, self-signed certificates\nI0418 10:49:45.684658       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587206985 cert, and key in /tmp/serving-cert-021579484/serving-signer.crt, /tmp/serving-cert-021579484/serving-signer.key\nI0418 10:49:46.152559       1 observer_polling.go:155] Starting file observer\nI0418 10:49:46.183650       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nE0418 10:50:56.106985       1 leaderelection.go:331] error retrieving resource lock openshift-kube-apiserver/cert-regeneration-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0418 11:04:19.150718       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:04:19.150751       1 leaderelection.go:67] leaderelection lost\n"\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-cert-syncer" is not ready\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-cert-syncer" is terminated: "Error" - "ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:03:59.763472       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:03:59.763819       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:04:09.790163       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:04:09.790491       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n"\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-insecure-readyz" is not ready\nStaticPodsDegraded: nodes/ip-10-0-155-8.ec2.internal pods/kube-apiserver-ip-10-0-155-8.ec2.internal container="kube-apiserver-insecure-readyz" is terminated: "Error" - "I0418 10:49:45.908316       1 readyz.go:103] Listening on 0.0.0.0:6080\n"\nNodeControllerDegraded: The master nodes not ready: node "ip-10-0-155-8.ec2.internal" not ready since 2020-04-18 11:06:35 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 18 11:06:53.143 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-155-8.ec2.internal" not ready since 2020-04-18 11:06:35 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nEtcdMembersDegraded: ip-10-0-155-8.ec2.internal members are unhealthy,  members are unknown
Apr 18 11:06:53.144 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-155-8.ec2.internal" not ready since 2020-04-18 11:06:35 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 18 11:06:53.156 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-155-8.ec2.internal" not ready since 2020-04-18 11:06:35 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 18 11:07:06.009 E ns/openshift-cluster-version pod/cluster-version-operator-688496b547-fp9fk node/ip-10-0-155-8.ec2.internal container=cluster-version-operator container exited with code 255 (Error): I0418 11:06:43.420439       1 start.go:19] ClusterVersionOperator v1.0.0-196-g23856901-dirty\nI0418 11:06:43.421939       1 merged_client_builder.go:122] Using in-cluster configuration\nI0418 11:06:43.461410       1 payload.go:210] Loading updatepayload from "/"\nI0418 11:06:46.384393       1 cvo.go:264] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-openshift-ci (D04761B116203B0C0859B61628B76E05B923888E: openshift-ci) - will check for signatures in containers/image format at https://storage.googleapis.com/openshift-release/test-1/signatures/openshift/release and from config maps in openshift-config-managed with label "release.openshift.io/verification-signatures"\nI0418 11:06:46.384785       1 leaderelection.go:241] attempting to acquire leader lease  openshift-cluster-version/version...\nE0418 11:06:46.405004       1 leaderelection.go:330] error retrieving resource lock openshift-cluster-version/version: Get https://127.0.0.1:6443/api/v1/namespaces/openshift-cluster-version/configmaps/version: dial tcp 127.0.0.1:6443: connect: connection refused\nI0418 11:06:46.405106       1 leaderelection.go:246] failed to acquire lease openshift-cluster-version/version\nI0418 11:06:59.574120       1 start.go:140] Shutting down due to terminated\nF0418 11:07:04.575294       1 start.go:146] Exiting\n
Apr 18 11:07:06.588 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-82.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-18T11:07:02.777Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-18T11:07:02.783Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-18T11:07:02.783Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-18T11:07:02.784Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-18T11:07:02.784Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-18T11:07:02.784Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-18T11:07:02.785Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-18T11:07:02.785Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-18
Apr 18 11:07:16.141 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6975f675b7-xxnbb node/ip-10-0-132-22.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): or-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0418 11:03:44.572658       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0418 11:06:43.753536       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0418 11:06:46.507833       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"74cb05f6-8cb6-40b6-bf14-25f7088e14e5", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0418 11:07:15.017562       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:07:15.017617       1 leaderelection.go:66] leaderelection lost\n
Apr 18 11:07:39.664 E ns/openshift-console pod/console-66d89d7c75-9zd9p node/ip-10-0-132-22.ec2.internal container=console container exited with code 2 (Error): 2020-04-18T10:53:06Z cmd/main: cookies are secure!\n2020-04-18T10:53:06Z cmd/main: Binding to [::]:8443...\n2020-04-18T10:53:06Z cmd/main: using TLS\n2020-04-18T10:55:13Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n
Apr 18 11:07:48.085 E kube-apiserver failed contacting the API: Get https://api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=38519&timeout=9m6s&timeoutSeconds=546&watch=true: dial tcp 34.225.104.28:6443: connect: connection refused
Apr 18 11:07:48.390 E kube-apiserver Kube API started failing: Get https://api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: dial tcp 34.225.104.28:6443: connect: connection refused
Apr 18 11:09:23.975 E ns/openshift-monitoring pod/node-exporter-qsmz4 node/ip-10-0-140-140.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:06:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:06:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:09:24.019 E ns/openshift-cluster-node-tuning-operator pod/tuned-6pnpt node/ip-10-0-140-140.ec2.internal container=tuned container exited with code 143 (Error): t-node content changed\nI0418 10:54:01.469286   87816 tuned.go:417] getting recommended profile...\nI0418 10:54:01.589730   87816 tuned.go:444] active profile () != recommended profile (openshift-node)\nI0418 10:54:01.589773   87816 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0418 10:54:01.589840   87816 tuned.go:310] starting tuned...\n2020-04-18 10:54:01,703 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-18 10:54:01,709 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-18 10:54:01,709 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-18 10:54:01,710 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-18 10:54:01,711 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-18 10:54:01,744 INFO     tuned.daemon.controller: starting controller\n2020-04-18 10:54:01,744 INFO     tuned.daemon.daemon: starting tuning\n2020-04-18 10:54:01,757 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-18 10:54:01,758 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-18 10:54:01,762 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-18 10:54:01,764 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:54:01,766 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:54:01,898 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:54:01,912 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0418 11:07:36.822295   87816 tuned.go:114] received signal: terminated\nI0418 11:07:36.822355   87816 tuned.go:351] sending TERM to PID 87906\n2020-04-18 11:07:36,824 INFO     tuned.daemon.controller: terminating controller\n2020-04-18 11:07:36,824 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 18 11:09:24.130 E ns/openshift-multus pod/multus-9psss node/ip-10-0-140-140.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:09:24.174 E ns/openshift-machine-config-operator pod/machine-config-daemon-lcwgg node/ip-10-0-140-140.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:09:26.868 E ns/openshift-multus pod/multus-9psss node/ip-10-0-140-140.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:09:27.765 E ns/openshift-machine-config-operator pod/machine-config-daemon-lcwgg node/ip-10-0-140-140.ec2.internal container=machine-config-daemon container exited with code 255 (Error): I0418 11:09:26.306738    1877 start.go:74] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0418 11:09:26.333421    1877 start.go:84] Calling chroot("/rootfs")\nI0418 11:09:26.335703    1877 rpm-ostree.go:366] Running captured: rpm-ostree status --json\nI0418 11:09:26.895100    1877 daemon.go:209] Booted osImageURL: registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable-initial@sha256:7fcc8c588786a2c5275da5fb2168a04d750945fbd86d4780a3ea25b10335eda2 (44.81.202004170830-0)\nF0418 11:09:26.895345    1877 start.go:128] Failed to initialize ClientBuilder: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined\n
Apr 18 11:09:31.700 E ns/openshift-multus pod/multus-9psss node/ip-10-0-140-140.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:10:03.409 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7996dd769c-ch4xc node/ip-10-0-153-224.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Apr 18 11:10:06.550 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver container exited with code 1 (Error): , no relationship to this object was found in the node authorizer graph\nI0418 11:07:12.525046       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-155-8.ec2.internal" cannot get configmap openshift-authentication/v4-0-config-system-service-ca, no relationship to this object was found in the node authorizer graph\nI0418 11:07:12.525428       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-155-8.ec2.internal" cannot get configmap openshift-authentication/v4-0-config-system-trusted-ca-bundle, no relationship to this object was found in the node authorizer graph\nE0418 11:07:29.928554       1 available_controller.go:415] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0418 11:07:39.891263       1 trace.go:116] Trace[451513554]: "List" url:/api/v1/secrets,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.155.8 (started: 2020-04-18 11:07:39.26006335 +0000 UTC m=+1121.977094043) (total time: 631.165899ms):\nTrace[451513554]: [631.16294ms] [630.022766ms] Writing http response done count:1146\nI0418 11:07:47.712913       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-132-22.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0418 11:07:47.713133       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\nI0418 11:07:47.736051       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0418 11:07:47.736106       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0418 11:07:47.744962       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\n
Apr 18 11:10:06.550 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0418 10:47:10.514350       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 18 11:10:06.550 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:07:30.013662       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:30.014095       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:07:40.023974       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:40.024312       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 18 11:10:06.550 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0418 10:48:57.798606       1 cmd.go:200] Using insecure, self-signed certificates\nI0418 10:48:57.799143       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1587206937 cert, and key in /tmp/serving-cert-051737813/serving-signer.crt, /tmp/serving-cert-051737813/serving-signer.key\nI0418 10:48:58.711663       1 observer_polling.go:155] Starting file observer\nI0418 10:49:03.073477       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0418 11:07:47.725052       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:07:47.725319       1 leaderelection.go:67] leaderelection lost\n
Apr 18 11:10:06.655 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): 978563       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s: dial tcp [::1]:6443: connect: connection refused\nI0418 10:54:32.104037       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0418 10:54:32.106847       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"58a0ea74-998c-47a6-b686-4cb4a2b3e70d", APIVersion:"v1", ResourceVersion:"29676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 37bcc971-cceb-47ad-b774-8f665fa727e1 became leader\nI0418 10:54:32.113256       1 csrcontroller.go:98] Starting CSR controller\nI0418 10:54:32.113281       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0418 10:54:32.115679       1 client_cert_rotation_controller.go:140] Starting CertRotationController - "CSRSigningCert"\nI0418 10:54:32.115753       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "CSRSigningCert"\nI0418 10:54:32.213542       1 shared_informer.go:204] Caches are synced for CSRController \nI0418 10:54:32.213592       1 resourcesync_controller.go:218] Starting ResourceSyncController\nI0418 10:54:32.216025       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "CSRSigningCert"\nI0418 11:07:47.743343       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 11:07:47.748873       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0418 11:07:47.749034       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0418 11:07:47.749092       1 csrcontroller.go:100] Shutting down CSR controller\nI0418 11:07:47.749219       1 csrcontroller.go:102] CSR controller shut down\n
Apr 18 11:10:06.655 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): operators.coreos.com/v2, Resource=catalogsourceconfigs": unable to monitor quota for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs"]\nI0418 11:05:26.726489       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0418 11:05:26.726503       1 policy_controller.go:147] Started Origin Controllers\nI0418 11:05:26.726536       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0418 11:05:26.726941       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0418 11:05:26.726998       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0418 11:05:26.835578       1 shared_informer.go:204] Caches are synced for resource quota \nW0418 11:07:12.417854       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 299; INTERNAL_ERROR") has prevented the request from succeeding\nW0418 11:07:12.418052       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 319; INTERNAL_ERROR") has prevented the request from succeeding\nW0418 11:07:12.418172       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 263; INTERNAL_ERROR") has prevented the request from succeeding\nW0418 11:07:12.418288       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 315; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 18 11:10:06.655 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 3233       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:14.204677       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:23.252784       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:23.253142       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:24.216568       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:24.216913       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:33.263157       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:33.263638       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:34.225142       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:34.225614       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:43.273700       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:43.274104       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:07:44.235931       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:07:44.236495       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 18 11:10:06.655 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-controller-manager container exited with code 2 (Error): on cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\nI0418 11:07:45.480342       1 replica_set.go:597] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-74598d4f5, need 0, deleting 1\nI0418 11:07:45.480390       1 replica_set.go:225] Found 4 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-74598d4f5: packageserver-5d7d6475b9, packageserver-74598d4f5, packageserver-7995588fb, packageserver-fd89bb87\nI0418 11:07:45.480518       1 controller_utils.go:603] Controller packageserver-74598d4f5 deleting pod openshift-operator-lifecycle-manager/packageserver-74598d4f5-6n27v\nI0418 11:07:45.480733       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"b9ffcbb7-6612-4748-b74d-8610466227ac", APIVersion:"apps/v1", ResourceVersion:"38365", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-74598d4f5 to 0\nI0418 11:07:45.495017       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0418 11:07:45.496281       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-74598d4f5", UID:"0fa3ce86-49b9-4cad-948a-8a5264a2f612", APIVersion:"apps/v1", ResourceVersion:"38500", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-74598d4f5-6n27v\nI0418 11:07:45.891240       1 deployment_controller.go:484] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\n
Apr 18 11:10:06.702 E ns/openshift-etcd pod/etcd-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-18 10:47:24.230573 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-22.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-22.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-18 10:47:24.231203 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-18 10:47:24.231595 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-22.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-22.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/18 10:47:24 grpc: addrConn.createTransport failed to connect to {https://10.0.132.22:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.132.22:9978: connect: connection refused". Reconnecting...\n2020-04-18 10:47:24.233360 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/18 10:47:25 grpc: addrConn.createTransport failed to connect to {https://10.0.132.22:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.132.22:9978: connect: connection refused". Reconnecting...\n
Apr 18 11:10:06.800 E ns/openshift-monitoring pod/node-exporter-qwqvx node/ip-10-0-132-22.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:07:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:10:06.832 E ns/openshift-controller-manager pod/controller-manager-cg4x8 node/ip-10-0-132-22.ec2.internal container=controller-manager container exited with code 1 (Error): 4&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0418 11:04:19.288261       1 reflector.go:320] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: Get https://172.30.0.1:443/apis/apps.openshift.io/v1/deploymentconfigs?allowWatchBookmarks=true&resourceVersion=34318&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0418 11:04:19.288303       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=25330&timeout=6m28s&timeoutSeconds=388&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0418 11:04:19.293333       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://172.30.0.1:443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=35009&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0418 11:04:19.295550       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://172.30.0.1:443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=35182&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0418 11:07:12.415331       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 9; INTERNAL_ERROR") has prevented the request from succeeding\nW0418 11:07:12.415496       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 11; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 18 11:10:06.876 E ns/openshift-cluster-node-tuning-operator pod/tuned-95mpl node/ip-10-0-132-22.ec2.internal container=tuned container exited with code 143 (Error): g recommended profile...\nI0418 10:53:51.730828   86075 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0418 10:53:51.859373   86075 tuned.go:258] recommended tuned profile openshift-control-plane content changed\nI0418 10:53:52.703621   86075 tuned.go:417] getting recommended profile...\nI0418 10:53:52.844878   86075 tuned.go:444] active profile () != recommended profile (openshift-control-plane)\nI0418 10:53:52.844992   86075 tuned.go:461] tuned daemon profiles changed, forcing tuned daemon reload\nI0418 10:53:52.845069   86075 tuned.go:310] starting tuned...\n2020-04-18 10:53:52,967 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-18 10:53:52,977 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-18 10:53:52,978 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-18 10:53:52,978 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-18 10:53:52,979 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-18 10:53:53,020 INFO     tuned.daemon.controller: starting controller\n2020-04-18 10:53:53,020 INFO     tuned.daemon.daemon: starting tuning\n2020-04-18 10:53:53,031 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-18 10:53:53,032 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-18 10:53:53,035 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-18 10:53:53,037 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:53:53,039 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:53:53,172 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:53:53,189 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 18 11:10:06.899 E ns/openshift-sdn pod/sdn-controller-9f7gs node/ip-10-0-132-22.ec2.internal container=sdn-controller container exited with code 2 (Error): enshift-network-controller", UID:"bc59233f-d74b-4534-afef-03ad7fe63064", ResourceVersion:"29940", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63722802382, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-132-22\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-18T10:26:22Z\",\"renewTime\":\"2020-04-18T10:54:49Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-132-22 became leader'\nI0418 10:54:49.389252       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0418 10:54:49.407257       1 master.go:51] Initializing SDN master\nI0418 10:54:49.420712       1 network_controller.go:61] Started OpenShift Network Controller\nE0418 11:07:48.072589       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=25139&timeout=9m16s&timeoutSeconds=556&watch=true: dial tcp 10.0.145.128:6443: connect: connection refused\nE0418 11:07:48.074521       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://api-int.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=38318&timeout=8m37s&timeoutSeconds=517&watch=true: dial tcp 10.0.145.128:6443: connect: connection refused\n
Apr 18 11:10:06.953 E ns/openshift-multus pod/multus-admission-controller-b2ns6 node/ip-10-0-132-22.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 18 11:10:06.998 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:28.857673       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:28.857713       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:30.876964       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:30.877155       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:32.886266       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:32.886297       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:34.895558       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:34.895599       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:36.906013       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:36.906045       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:38.915487       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:38.915526       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:40.923584       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:40.923723       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:42.933007       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:42.933153       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:44.941124       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:44.941169       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:07:46.970303       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:07:46.970490       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 18 11:10:06.998 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=kube-scheduler container exited with code 2 (Error): c575cd55d-tb94j: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0418 11:07:30.820487       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7995588fb-nwqx4 is bound successfully on node "ip-10-0-134-195.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0418 11:07:31.302771       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-757667ddd5-vdw5q: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0418 11:07:36.315348       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6c575cd55d-tb94j: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0418 11:07:37.343675       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-7995588fb-jxq7c is bound successfully on node "ip-10-0-155-8.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0418 11:07:42.302822       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-757667ddd5-vdw5q: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0418 11:07:45.303412       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6c575cd55d-tb94j: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0418 11:07:46.648740       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-757667ddd5-vdw5q: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Apr 18 11:10:07.022 E ns/openshift-multus pod/multus-8jxv2 node/ip-10-0-132-22.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:10:07.066 E ns/openshift-machine-config-operator pod/machine-config-daemon-cl52n node/ip-10-0-132-22.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:10:07.084 E ns/openshift-machine-config-operator pod/machine-config-server-8v9mk node/ip-10-0-132-22.ec2.internal container=machine-config-server container exited with code 2 (Error): I0418 11:03:48.928274       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0418 11:03:48.930162       1 api.go:51] Launching server on :22624\nI0418 11:03:48.930205       1 api.go:51] Launching server on :22623\n
Apr 18 11:10:11.178 E ns/openshift-monitoring pod/node-exporter-qwqvx node/ip-10-0-132-22.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:10:11.267 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady::StaticPods_Error: StaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcd" is not ready\nStaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcd" is terminated: "Completed" - ""\nStaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcd-metrics" is not ready\nStaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcd-metrics" is terminated: "Error" - "2020-04-18 10:47:24.230573 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-22.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-132-22.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-18 10:47:24.231203 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-18 10:47:24.231595 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-22.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-132-22.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/18 10:47:24 grpc: addrConn.createTransport failed to connect to {https://10.0.132.22:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.132.22:9978: connect: connection refused\". Reconnecting...\n2020-04-18 10:47:24.233360 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/18 10:47:25 grpc: addrConn.createTransport failed to connect to {https://10.0.132.22:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.132.22:9978: connect: connection refused\". Reconnecting...\n"\nStaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcdctl" is not ready\nStaticPodsDegraded: nodes/ip-10-0-132-22.ec2.internal pods/etcd-ip-10-0-132-22.ec2.internal container="etcdctl" is terminated: "Completed" - ""\nNodeControllerDegraded: The master nodes not ready: node "ip-10-0-132-22.ec2.internal" not ready since 2020-04-18 11:10:06 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network])\nEtcdMembersDegraded: ip-10-0-132-22.ec2.internal members are unhealthy,  members are unknown
Apr 18 11:10:12.369 E ns/openshift-multus pod/multus-8jxv2 node/ip-10-0-132-22.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:10:14.961 E ns/openshift-multus pod/multus-8jxv2 node/ip-10-0-132-22.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:10:18.082 E ns/openshift-machine-config-operator pod/machine-config-daemon-cl52n node/ip-10-0-132-22.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 18 11:10:19.109 E ns/openshift-multus pod/multus-8jxv2 node/ip-10-0-132-22.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:10:19.114 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-140-140.ec2.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-18T11:10:16.407Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-18T11:10:16.430Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-18T11:10:16.439Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-18T11:10:16.441Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-18T11:10:16.441Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-18T11:10:16.441Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-18T11:10:16.441Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-18T11:10:16.443Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-18T11:10:16.443Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-18T11:10:16.443Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-18T11:10:16.443Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-18T11:10:16.444Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-18T11:10:16.444Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-18T11:10:16.444Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-18T11:10:16.444Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-18T11:10:16.446Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-18
Apr 18 11:10:36.279 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Apr 18 11:10:40.614 E ns/openshift-insights pod/insights-operator-8584b5cf97-zgcv8 node/ip-10-0-134-195.ec2.internal container=operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-insights_insights-operator-8584b5cf97-zgcv8_ba1f1b83-2077-4ddd-b32e-a014e59f0875/operator/0.log": lstat /var/log/pods/openshift-insights_insights-operator-8584b5cf97-zgcv8_ba1f1b83-2077-4ddd-b32e-a014e59f0875/operator/0.log: no such file or directory
Apr 18 11:10:40.660 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-547df5f78d-fh2cl node/ip-10-0-134-195.ec2.internal container=kube-apiserver-operator container exited with code 255 (Error): lient"\nI0418 11:10:38.615453       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0418 11:10:38.615492       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0418 11:10:38.615515       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0418 11:10:38.615524       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0418 11:10:38.615541       1 base_controller.go:74] Shutting down RevisionController ...\nI0418 11:10:38.615554       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0418 11:10:38.615567       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0418 11:10:38.615577       1 state_controller.go:171] Shutting down EncryptionStateController\nI0418 11:10:38.615602       1 termination_observer.go:154] Shutting down TerminationObserver\nI0418 11:10:38.618082       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0418 11:10:38.619023       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0418 11:10:38.619101       1 base_controller.go:74] Shutting down InstallerController ...\nI0418 11:10:38.619232       1 base_controller.go:74] Shutting down PruneController ...\nI0418 11:10:38.619317       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0418 11:10:38.619376       1 certrotationtime_upgradeable.go:103] Shutting down CertRotationTimeUpgradeableController\nI0418 11:10:38.619430       1 base_controller.go:74] Shutting down  ...\nI0418 11:10:38.619526       1 base_controller.go:74] Shutting down NodeController ...\nI0418 11:10:38.619600       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0418 11:10:38.619689       1 base_controller.go:74] Shutting down  ...\nI0418 11:10:38.619722       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nF0418 11:10:38.620110       1 builder.go:243] stopped\nI0418 11:10:38.633508       1 targetconfigcontroller.go:440] Shutting down TargetConfigController\n
Apr 18 11:10:44.741 E ns/openshift-service-ca pod/service-ca-6f698c548-zccll node/ip-10-0-134-195.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Apr 18 11:11:03.792 E ns/openshift-console pod/console-66d89d7c75-9l8wz node/ip-10-0-134-195.ec2.internal container=console container exited with code 2 (Error): 2020-04-18T11:04:10Z cmd/main: cookies are secure!\n2020-04-18T11:04:12Z cmd/main: Binding to [::]:8443...\n2020-04-18T11:04:12Z cmd/main: using TLS\n2020-04-18T11:05:12Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Apr 18 11:11:08.049 E kube-apiserver failed contacting the API: Get https://api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=41909&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp 34.225.185.94:6443: connect: connection refused
Apr 18 11:11:13.359 E kube-apiserver Kube API started failing: Get https://api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 18 11:11:22.037 E ns/openshift-authentication pod/oauth-openshift-749c4d5f44-8v4mg node/ip-10-0-155-8.ec2.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0418 11:11:21.207730       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0418 11:11:21.208090       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0418 11:11:21.210259       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 18 11:11:24.049 E ns/openshift-authentication pod/oauth-openshift-749c4d5f44-8v4mg node/ip-10-0-155-8.ec2.internal container=oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nW0418 11:11:23.476572       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nW0418 11:11:23.477104       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found\nF0418 11:11:23.478936       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 18 11:11:33.101 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 11:11:31.657665       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 11:11:31.663231       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 11:11:31.666486       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 11:11:31.666635       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 11:11:31.667376       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 11:11:52.177 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-22.ec2.internal node/ip-10-0-132-22.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 11:11:51.901189       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 11:11:51.902845       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 11:11:51.904545       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 11:11:51.904653       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 11:11:51.905063       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 11:12:01.420 E ns/openshift-marketplace pod/redhat-operators-64454857bc-v6frn node/ip-10-0-140-140.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Apr 18 11:12:06.460 E ns/openshift-marketplace pod/redhat-marketplace-54c5dd48b9-ms6np node/ip-10-0-140-140.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Apr 18 11:12:40.331 E ns/openshift-monitoring pod/node-exporter-t7fvk node/ip-10-0-153-224.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:09:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:12:40.355 E ns/openshift-cluster-node-tuning-operator pod/tuned-2r7ws node/ip-10-0-153-224.ec2.internal container=tuned container exited with code 143 (Error): , ignoring CPU energy performance bias\n2020-04-18 10:53:09,377 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:53:09,378 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:53:09,495 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:53:09,509 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0418 11:04:19.269016   54085 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0418 11:04:19.269401   54085 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0418 11:04:19.279762   54085 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=28609&timeoutSeconds=427&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0418 11:04:20.732162   54085 tuned.go:513] profile "ip-10-0-153-224.ec2.internal" changed, tuned profile requested: openshift-node\nI0418 11:04:21.105083   54085 tuned.go:417] getting recommended profile...\nI0418 11:04:21.295614   54085 tuned.go:455] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0418 11:04:21.762580   54085 tuned.go:554] tuned "rendered" changed\nI0418 11:04:21.762622   54085 tuned.go:224] extracting tuned profiles\nI0418 11:04:21.762635   54085 tuned.go:417] getting recommended profile...\nI0418 11:04:21.910068   54085 tuned.go:258] recommended tuned profile openshift-node content unchanged\nI0418 11:10:54.102330   54085 tuned.go:114] received signal: terminated\nI0418 11:10:54.102366   54085 tuned.go:351] sending TERM to PID 54116\n2020-04-18 11:10:54,112 INFO     tuned.daemon.controller: terminating controller\n2020-04-18 11:10:54,112 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 18 11:12:40.426 E ns/openshift-machine-config-operator pod/machine-config-daemon-c7bdr node/ip-10-0-153-224.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:12:40.463 E ns/openshift-multus pod/multus-bsqt5 node/ip-10-0-153-224.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:12:43.445 E ns/openshift-multus pod/multus-bsqt5 node/ip-10-0-153-224.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:13:27.702 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:48.157055       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:48.157089       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:50.192001       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:50.192043       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:52.200324       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:52.200348       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:54.232782       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:54.232969       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:56.252224       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:56.252258       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:10:58.263893       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:10:58.263926       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:11:00.277378       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:11:00.277406       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:11:02.298605       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:11:02.298668       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:11:04.309691       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:11:04.309728       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0418 11:11:06.323300       1 certsync_controller.go:65] Syncing configmaps: []\nI0418 11:11:06.323335       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 18 11:13:27.702 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-scheduler container exited with code 2 (Error): 1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=28959&timeoutSeconds=363&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.777769       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28779&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.779448       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28184&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.780778       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=20981&timeout=9m42s&timeoutSeconds=582&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.781800       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=20984&timeout=6m1s&timeoutSeconds=361&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.782905       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24467&timeout=5m5s&timeoutSeconds=305&watch=true: dial tcp [::1]:6443: connect: connection refused\n
Apr 18 11:13:27.759 E ns/openshift-etcd pod/etcd-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-04-18 10:46:48.025231 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-195.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-195.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-04-18 10:46:48.026747 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-04-18 10:46:48.027417 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-195.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-195.ec2.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \nWARNING: 2020/04/18 10:46:48 grpc: addrConn.createTransport failed to connect to {https://10.0.134.195:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.134.195:9978: connect: connection refused". Reconnecting...\n2020-04-18 10:46:48.032515 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/04/18 10:46:49 grpc: addrConn.createTransport failed to connect to {https://10.0.134.195:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.134.195:9978: connect: connection refused". Reconnecting...\n
Apr 18 11:13:27.796 E ns/openshift-machine-config-operator pod/machine-config-daemon-sbbbx node/ip-10-0-134-195.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Apr 18 11:13:27.833 E ns/openshift-multus pod/multus-admission-controller-pvz4r node/ip-10-0-134-195.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Apr 18 11:13:27.860 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): I0418 10:54:45.150042       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 10:54:45.151725       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 10:54:45.153399       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 10:54:45.153476       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 18 11:13:27.860 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-controller-manager container exited with code 2 (Error):      1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:45.119470       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:47.064211       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:47.064249       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0418 10:53:49.260777       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:53.293798       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0418 10:53:58.699482       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\nE0418 11:07:47.748331       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\n
Apr 18 11:13:27.860 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error): 5344       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:10:57.385847       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:10:59.815572       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:10:59.815920       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:11:02.698516       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:02.698887       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:11:03.896786       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:03.897134       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:11:04.898654       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:04.899061       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:11:05.491755       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:05.492056       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0418 11:11:06.289483       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:06.289950       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 18 11:13:27.860 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): W0418 10:53:57.074328       1 cmd.go:200] Using insecure, self-signed certificates\nI0418 10:53:57.074694       1 crypto.go:588] Generating new CA for cert-recovery-controller-signer@1587207237 cert, and key in /tmp/serving-cert-801731556/serving-signer.crt, /tmp/serving-cert-801731556/serving-signer.key\nI0418 10:53:57.678877       1 observer_polling.go:155] Starting file observer\nI0418 10:53:59.298281       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cert-recovery-controller-lock...\nI0418 11:11:07.433329       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0418 11:11:07.433383       1 leaderelection.go:67] leaderelection lost\n
Apr 18 11:13:27.880 E ns/openshift-sdn pod/sdn-controller-xfr6z node/ip-10-0-134-195.ec2.internal container=sdn-controller container exited with code 2 (Error): I0418 10:54:37.819667       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 18 11:13:27.899 E ns/openshift-monitoring pod/node-exporter-rdfm4 node/ip-10-0-134-195.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:10:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:11:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-04-18T11:11:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Apr 18 11:13:27.934 E ns/openshift-multus pod/multus-n758v node/ip-10-0-134-195.ec2.internal container=kube-multus container exited with code 143 (Error): 
Apr 18 11:13:28.010 E ns/openshift-machine-config-operator pod/machine-config-server-cbxsr node/ip-10-0-134-195.ec2.internal container=machine-config-server container exited with code 2 (Error): I0418 11:04:12.194603       1 start.go:38] Version: machine-config-daemon-4.4.0-202004170331-2-ga8fa9e20-dirty (a8fa9e2075aebe0cf15202a05660f15fe686f4d2)\nI0418 11:04:12.201120       1 api.go:51] Launching server on :22624\nI0418 11:04:12.201294       1 api.go:51] Launching server on :22623\n
Apr 18 11:13:28.041 E ns/openshift-cluster-node-tuning-operator pod/tuned-dwrpg node/ip-10-0-134-195.ec2.internal container=tuned container exited with code 143 (Error): IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-18 10:53:19,675 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-18 10:53:19,679 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-18 10:53:19,842 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-18 10:53:19,855 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0418 11:04:19.278040   83243 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0418 11:04:19.304695   83243 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0418 11:04:19.607310   83243 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:605: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=28609&timeoutSeconds=506&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0418 11:07:48.054146   83243 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0418 11:07:48.068216   83243 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0418 11:07:49.436400   83243 tuned.go:513] profile "ip-10-0-134-195.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0418 11:07:49.493266   83243 tuned.go:554] tuned "rendered" changed\nI0418 11:07:49.493291   83243 tuned.go:224] extracting tuned profiles\nI0418 11:07:49.493300   83243 tuned.go:417] getting recommended profile...\nI0418 11:07:50.132610   83243 tuned.go:417] getting recommended profile...\nI0418 11:07:51.365739   83243 tuned.go:455] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0418 11:07:51.488549   83243 tuned.go:258] recommended tuned profile openshift-control-plane content unchanged\n
Apr 18 11:13:28.092 E ns/openshift-controller-manager pod/controller-manager-d75ww node/ip-10-0-134-195.ec2.internal container=controller-manager container exited with code 1 (Error): I0418 10:53:00.548055       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0418 10:53:00.549615       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable@sha256:cf15be354f1cdaacdca513b710286b3b57e25b33f29496fe5ded94ce5d574703"\nI0418 10:53:00.549662       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mmn95xcw/stable@sha256:7291b8d33c03cf2f563efef5bc757e362782144d67258bba957d61fdccf2a48d"\nI0418 10:53:00.549722       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0418 10:53:00.549753       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nE0418 11:07:56.088916       1 leaderelection.go:331] error retrieving resource lock openshift-controller-manager/openshift-master-controllers: Get https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager/configmaps/openshift-master-controllers: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 18 11:13:32.511 E ns/openshift-multus pod/multus-n758v node/ip-10-0-134-195.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:13:32.609 E ns/openshift-monitoring pod/node-exporter-rdfm4 node/ip-10-0-134-195.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:13:32.787 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver container exited with code 1 (Error): 1:07.574702       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0418 11:11:07.580890       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://localhost:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nE0418 11:11:07.644361       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.700710       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.704052       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.704077       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.704242       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.704266       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.706340       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.706368       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0418 11:11:07.706389       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0418 11:11:07.898491       1 genericapiserver.go:647] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-195.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0418 11:11:07.898523       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Apr 18 11:13:32.787 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0418 10:52:15.217215       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 18 11:13:32.787 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:10:50.609490       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:10:50.611002       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0418 11:11:00.618764       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0418 11:11:00.619107       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 18 11:13:32.787 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): lancer.go:26] syncing external loadbalancer hostnames: api.ci-op-mmn95xcw-1d6bd.origin-ci-int-aws.dev.rhcloud.com\nI0418 11:07:49.951459       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0418 11:11:07.588925       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0418 11:11:07.623234       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0418 11:11:07.623275       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0418 11:11:07.623292       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0418 11:11:07.623311       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0418 11:11:07.623329       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0418 11:11:07.623345       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0418 11:11:07.623362       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0418 11:11:07.623380       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0418 11:11:07.623450       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0418 11:11:07.623466       1 certrotationcontroller.go:556] Shutting down CertRotation\nI0418 11:11:07.623477       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0418 11:11:07.623488       1 cabundlesyncer.go:86] CA bundle controller shut down\nF0418 11:11:07.687135       1 leaderelection.go:67] leaderelection lost\n
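Note: the "watch chan error: etcdserver: mvcc: required revision has been compacted" lines above, and the "Unexpected EOF during watch stream event decoding" lines in the tuned log, are both handled client-side by re-listing and re-establishing the watch. A minimal sketch of that relist-and-rewatch loop using client-go, with an illustrative namespace and resource; reflectors do this automatically:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// watchPods re-lists and restarts its watch whenever the stream breaks or the
// requested resourceVersion has been compacted away (410 Gone), instead of
// treating either as fatal.
func watchPods(ctx context.Context, client kubernetes.Interface, ns string) error {
	for {
		list, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}

		w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: list.ResourceVersion})
		if err != nil {
			return err
		}
		for ev := range w.ResultChan() {
			if ev.Type == watch.Error {
				if status, ok := ev.Object.(*metav1.Status); ok && apierrors.IsResourceExpired(apierrors.FromObject(status)) {
					break // compacted: fall through to a fresh list
				}
				w.Stop()
				return fmt.Errorf("watch error: %v", ev.Object)
			}
			if pod, ok := ev.Object.(*corev1.Pod); ok {
				fmt.Printf("%s pod %s/%s\n", ev.Type, pod.Namespace, pod.Name)
			}
		}
		w.Stop() // stream closed (e.g. EOF) or expired: start over
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := watchPods(context.Background(), client, "openshift-kube-apiserver"); err != nil {
		panic(err)
	}
}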
Apr 18 11:13:34.823 E ns/openshift-multus pod/multus-n758v node/ip-10-0-134-195.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:13:36.942 E ns/openshift-multus pod/multus-n758v node/ip-10-0-134-195.ec2.internal invariant violation: pod may not transition Running->Pending
Apr 18 11:13:39.018 E ns/openshift-machine-config-operator pod/machine-config-daemon-sbbbx node/ip-10-0-134-195.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Apr 18 11:13:39.044 E ns/openshift-multus pod/multus-n758v node/ip-10-0-134-195.ec2.internal invariant violation: pod may not transition Running->Pending
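Note: the "pod may not transition Running->Pending" events are the upgrade monitor asserting that a pod's phase never moves backwards for the same pod UID; a kubelet briefly reporting a restarting daemonset pod as Pending trips it. A minimal sketch of that kind of check, not the actual openshift-tests implementation:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// phaseRank orders pod phases along the normal lifecycle.
var phaseRank = map[corev1.PodPhase]int{
	corev1.PodPending:   1,
	corev1.PodRunning:   2,
	corev1.PodSucceeded: 3,
	corev1.PodFailed:    3,
}

// checkPhaseTransition flags a pod whose phase moves backwards, e.g.
// Running -> Pending, for the same pod UID.
func checkPhaseTransition(oldPod, newPod *corev1.Pod) error {
	if oldPod.UID != newPod.UID {
		return nil // a recreated pod legitimately starts at Pending again
	}
	if phaseRank[newPod.Status.Phase] < phaseRank[oldPod.Status.Phase] {
		return fmt.Errorf("pod %s/%s may not transition %s->%s",
			newPod.Namespace, newPod.Name, oldPod.Status.Phase, newPod.Status.Phase)
	}
	return nil
}

func main() {
	oldPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Namespace: "openshift-multus", Name: "multus-n758v", UID: "example-uid"},
		Status:     corev1.PodStatus{Phase: corev1.PodRunning},
	}
	newPod := oldPod.DeepCopy()
	newPod.Status.Phase = corev1.PodPending
	if err := checkPhaseTransition(oldPod, newPod); err != nil {
		fmt.Println("invariant violation:", err)
	}
}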
Apr 18 11:13:44.353 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-134-195.ec2.internal" not ready since 2020-04-18 11:13:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 18 11:13:44.356 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-134-195.ec2.internal" not ready since 2020-04-18 11:13:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 18 11:13:44.373 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-134-195.ec2.internal" not ready since 2020-04-18 11:13:27 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
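Note: all three Degraded conditions above come from the operators' node controllers reacting to the same NotReady master node. A minimal sketch of reading such a condition with the OpenShift config client, assuming a recent openshift/client-go; the operator name is illustrative:

package main

import (
	"context"
	"fmt"
	"os"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := configclient.NewForConfigOrDie(cfg)

	// Fetch one ClusterOperator and print its Degraded condition, the same
	// condition the three events above report flipping to True.
	co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range co.Status.Conditions {
		if c.Type == configv1.OperatorDegraded {
			fmt.Printf("%s Degraded=%s: %s: %s\n", co.Name, c.Status, c.Reason, c.Message)
		}
	}
}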
Apr 18 11:14:01.614 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-195.ec2.internal node/ip-10-0-134-195.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 11:13:59.387170       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 11:13:59.397209       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 11:13:59.409467       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0418 11:13:59.404800       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0418 11:13:59.418188       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 11:14:27.809 E ns/openshift-cluster-node-tuning-operator pod/tuned-6pnpt node/ip-10-0-140-140.ec2.internal container=tuned container exited with code 2 (Error): E0418 11:09:26.747896    2392 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0418 11:09:26.748391    2392 tuned.go:698] increased retry period to 20\nE0418 11:09:46.748782    2392 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0418 11:09:46.748806    2392 tuned.go:698] increased retry period to 40\nE0418 11:10:26.749246    2392 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0418 11:10:26.749275    2392 tuned.go:698] increased retry period to 80\nE0418 11:11:46.749592    2392 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0418 11:11:46.749613    2392 tuned.go:698] increased retry period to 160\nE0418 11:14:26.749969    2392 tuned.go:696] cannot stat kubeconfig "/root/.kube/config"\nI0418 11:14:26.750014    2392 tuned.go:698] increased retry period to 320\nE0418 11:14:26.750025    2392 tuned.go:702] seen 5 errors in 300 seconds (limit was 610), terminating...\npanic: cannot stat kubeconfig "/root/.kube/config"\n\ngoroutine 1 [running]:\ngithub.com/openshift/cluster-node-tuning-operator/pkg/tuned.Run(0xc0000c97e2, 0x16d99e8, 0xd)\n	/go/src/github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:743 +0x1cf\nmain.main()\n	/go/src/github.com/openshift/cluster-node-tuning-operator/cmd/cluster-node-tuning-operator/main.go:60 +0x343\n
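Note: the tuned daemon above doubles its retry period after each failed kubeconfig stat (20s, 40s, 80s, 160s, 320s) and panics once its error budget is exhausted; the stack trace points at pkg/tuned/tuned.go in the repository under test. A minimal sketch of that backoff-with-error-budget shape, with illustrative thresholds; the real limits live in tuned.go:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForKubeconfig polls for the kubeconfig, doubling the retry period on
// every failure (the "increased retry period to N" lines above) and giving up
// once too many errors pile up inside the window. Thresholds are illustrative.
func waitForKubeconfig(path string, window time.Duration, maxErrors int) error {
	retry := 10 * time.Second
	start := time.Now()
	errs := 0

	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		errs++
		if errs >= maxErrors && time.Since(start) <= window {
			// The real daemon panics at this point, which is why the
			// container exits with an error and the event above is recorded.
			return fmt.Errorf("cannot stat kubeconfig %q: seen %d errors in %s", path, errs, window)
		}
		time.Sleep(retry)
		retry *= 2 // exponential backoff
	}
}

func main() {
	if err := waitForKubeconfig("/root/.kube/config", 300*time.Second, 5); err != nil {
		panic(err)
	}
}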
Apr 18 11:14:58.164 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 11:14:56.854875       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 11:14:56.855735       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 11:14:56.861736       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0418 11:14:56.862179       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0418 11:14:56.862484       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Apr 18 11:15:20.296 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-155-8.ec2.internal node/ip-10-0-155-8.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0418 11:15:19.557254       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0418 11:15:19.560320       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0418 11:15:19.560431       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0418 11:15:19.561068       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
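Note: the three cluster-policy-controller restarts above all die the same way: the new container tries to bind 0.0.0.0:10357 while the previous instance still holds the port, so the listen fails with EADDRINUSE and the process exits fatally until the old listener goes away. A minimal sketch of that failure mode:

package main

import (
	"errors"
	"log"
	"net"
	"syscall"
)

func main() {
	// The health-check port used by cluster-policy-controller.
	ln, err := net.Listen("tcp", "0.0.0.0:10357")
	if errors.Is(err, syscall.EADDRINUSE) {
		// A previous instance still owns the port; the real container exits
		// fatally here and the kubelet restarts it, producing the repeating
		// exit code 255 events above.
		log.Fatalf("listen tcp 0.0.0.0:10357: %v", err)
	}
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Printf("serving health checks on %s", ln.Addr())
}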