Result SUCCESS
Tests 4 failed / 21 succeeded
Started 2020-02-25 02:07
Elapsed 1h29m
Work namespace ci-op-8c6j24f3
Refs openshift-4.5:5aff605c,29:b75ec248
pod 79f2e947-5773-11ea-9151-0a58ac1030f5
repo openshift/etcd
revision 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (40m31s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during upgrade for at least 2s:

Feb 25 03:01:29.653 E ns/e2e-k8s-service-lb-available-3589 svc/service-test Service stopped responding to GET requests on reused connections
Feb 25 03:01:29.683 I ns/e2e-k8s-service-lb-available-3589 svc/service-test Service started responding to GET requests on reused connections
Feb 25 03:01:55.653 E ns/e2e-k8s-service-lb-available-3589 svc/service-test Service stopped responding to GET requests on reused connections
Feb 25 03:01:55.688 I ns/e2e-k8s-service-lb-available-3589 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1582601249.xml
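
For reference, a minimal shell sketch for pulling these disruption events back out of the artifact named above; it assumes the failure text is embedded verbatim in the junit XML, as quoted in this report:

# Hypothetical spot check against the junit artifact referenced above.
grep -Eo 'Service (stopped|started) responding to GET requests[^<]*' junit_upgrade_1582601249.xml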



Cluster upgrade Cluster frontend ingress remain available (40m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 4m22s:

Feb 25 02:56:27.969 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 02:56:27.969 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 25 02:56:28.016 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 25 02:56:28.024 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 02:57:23.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 02:57:23.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 25 02:57:23.986 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 25 02:57:23.989 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 02:59:19.935 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 25 02:59:20.010 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 25 03:01:04.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 03:01:05.001 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 03:01:28.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 25 03:01:29.081 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 25 03:09:19.187 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 25 03:09:19.846 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 25 03:09:19.926 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 25 03:09:19.934 - 8s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 25 03:09:20.829 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 03:09:20.935 - 8s    E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Feb 25 03:09:28.834 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 25 03:09:30.808 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 03:09:38.834 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 25 03:09:38.934 - 14s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 25 03:09:40.808 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 03:09:40.935 - 12s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Feb 25 03:09:53.905 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 03:09:53.921 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 25 03:12:08.935 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 25 03:12:08.935 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 25 03:12:08.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 25 03:12:08.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 03:12:09.935 - 29s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests on reused connections
Feb 25 03:12:09.935 - 29s   E ns/openshift-authentication route/oauth-openshift Route is not responding to GET requests over new connections
Feb 25 03:12:09.935 - 43s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 25 03:12:09.935 - 43s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 25 03:12:39.004 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 03:12:39.005 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
Feb 25 03:12:54.020 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 25 03:12:54.020 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 25 03:15:08.935 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 25 03:15:09.935 - 28s   E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 25 03:15:13.955 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 25 03:15:14.935 - 23s   E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 25 03:15:39.630 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 25 03:15:39.635 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 25 03:17:07.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests over new connections
Feb 25 03:17:07.935 E ns/openshift-authentication route/oauth-openshift Route stopped responding to GET requests on reused connections
Feb 25 03:17:08.013 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests over new connections
Feb 25 03:17:08.020 I ns/openshift-authentication route/oauth-openshift Route started responding to GET requests on reused connections
				from junit_upgrade_1582601249.xml
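
The disrupted frontends above are the standard console and oauth routes. A minimal shell sketch for spot-checking them by hand against a live cluster follows; it assumes an authenticated oc session, and the probe path ("/") and curl flags are illustrative rather than what the upgrade test itself uses:

# Hypothetical manual probe of the two routes named in the events above.
for r in console/openshift-console oauth-openshift/openshift-authentication; do
  name=${r%%/*}; ns=${r##*/}
  host=$(oc get route "$name" -n "$ns" -o jsonpath='{.spec.host}')
  printf '%s %s\n' "$host" "$(curl -ks -o /dev/null -w '%{http_code}' "https://$host/")"
done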



Cluster upgrade Kubernetes and OpenShift APIs remain available (40m1s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during upgrade for at least 51s:

Feb 25 03:09:33.890 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 25 03:09:33.910 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:09:49.890 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded
Feb 25 03:09:50.889 - 8s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:09:59.912 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:12:45.890 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 25 03:12:45.908 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:02.890 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 25 03:13:02.906 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:10.003 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:10.022 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:13.074 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:13.889 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:13:13.906 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:19.218 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:19.348 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:28.434 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:28.889 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:13:34.599 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:37.651 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:37.668 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:43.796 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:43.889 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:13:46.889 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:49.941 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:49.958 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:53.011 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:53.040 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:13:56.084 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:13:56.101 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:14:02.226 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:14:02.889 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:14:05.317 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:14:08.371 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:14:08.889 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:14:08.907 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:14:17.589 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:14:17.889 - 7s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:14:26.820 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:15:54.890 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 25 03:15:54.945 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:01.584 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:01.607 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:07.727 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:07.761 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:10.799 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:10.889 - 6s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:16:16.962 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:20.014 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:20.889 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:16:23.108 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:26.161 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:26.889 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:16:32.322 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:38.447 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:38.889 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:16:44.627 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:47.666 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:47.686 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:50.736 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:50.766 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:53.808 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:53.856 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:16:59.950 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:16:59.977 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:03.022 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:03.041 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:06.094 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:06.113 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:09.167 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:09.889 - 4s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:17:15.330 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:18.382 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:18.889 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:17:18.909 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:21.454 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:21.474 I openshift-apiserver OpenShift API started responding to GET requests
Feb 25 03:17:24.527 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 25 03:17:24.547 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1582601249.xml
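
Per the URLs quoted above, the availability probe is a GET for a deliberately missing imagestream ("missing" in the openshift-apiserver namespace) with a 15s timeout; a NotFound response means the aggregated API answered, while the failures above are timeouts and "unable to handle the request" errors. A minimal sketch of an equivalent manual check, assuming an authenticated oc session:

# Hypothetical equivalent of the availability probe quoted in the events above;
# a healthy API responds with: imagestreams.image.openshift.io "missing" not found
oc get imagestreams.image.openshift.io missing -n openshift-apiserver --request-timeout=15s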



openshift-tests Monitor cluster while tests execute (40m34s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
217 error level events were detected during this test run:

Feb 25 02:49:56.701 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7c8b7785c7-5vbdn node/ip-10-0-133-131.ec2.internal container=kube-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:50:37.372 E ns/openshift-machine-api pod/machine-api-operator-5488c55cc6-xzqh9 node/ip-10-0-133-131.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:50:37.372 E ns/openshift-machine-api pod/machine-api-operator-5488c55cc6-xzqh9 node/ip-10-0-133-131.ec2.internal container=machine-api-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:51:06.628 E kube-apiserver Kube API started failing: etcdserver: leader changed
Feb 25 02:51:11.592 E kube-apiserver Kube API is not responding to GET requests
Feb 25 02:52:37.138 E ns/openshift-machine-api pod/machine-api-controllers-7f6956d55d-6xt8k node/ip-10-0-156-20.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 25 02:54:16.377 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDaemonSet_UnavailablePod: APIServerDaemonSetDegraded: 1 of 3 requested instances are unavailable
Feb 25 02:54:19.599 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-55d86f9b6b-cbxjt node/ip-10-0-133-131.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-02-25T02:30:55Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-02-25T02:30:55Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-02-25T02:30:55Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-02-25T02:30:55Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-02-25-020833"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0225 02:36:07.997458       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"40635c26-ec55-4960-92cb-0525ce3d8ff7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0225 02:54:18.890987       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0225 02:54:18.891037       1 leaderelection.go:66] leaderelection lost\n
Feb 25 02:56:08.079 E ns/openshift-cluster-machine-approver pod/machine-approver-67567684d5-gtn9c node/ip-10-0-133-131.ec2.internal container=machine-approver-controller container exited with code 2 (Error): 0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0225 02:44:14.160382       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:openshift-cluster-machine-approver:machine-approver-sa" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:controller:machine-approver" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]\nE0225 02:56:05.021786       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=19594&timeoutSeconds=346&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Feb 25 02:56:10.393 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-648695qtw node/ip-10-0-133-131.ec2.internal container=operator container exited with code 255 (Error): )\nI0225 02:56:06.067365       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 21580 (24899)\nI0225 02:56:06.067513       1 reflector.go:297] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 21686 (25027)\nI0225 02:56:06.067602       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 21180 (24890)\nI0225 02:56:06.127469       1 reflector.go:297] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 23836 (25507)\nI0225 02:56:06.142777       1 reflector.go:158] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:06.145577       1 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:07.067372       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:07.067516       1 reflector.go:158] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0225 02:56:07.067561       1 reflector.go:158] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0225 02:56:07.067576       1 reflector.go:158] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:07.068032       1 reflector.go:158] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:07.131221       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0225 02:56:09.292766       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0225 02:56:09.292815       1 leaderelection.go:66] leaderelection lost\n
Feb 25 02:56:13.408 E ns/openshift-monitoring pod/node-exporter-g2wsw node/ip-10-0-155-150.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:56:13.408 E ns/openshift-monitoring pod/node-exporter-g2wsw node/ip-10-0-155-150.ec2.internal container=node-exporter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:56:15.893 E ns/openshift-authentication pod/oauth-openshift-759cf5885-lpz2g node/ip-10-0-133-131.ec2.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:56:18.416 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-c745f7b9f-t4gk5 node/ip-10-0-137-35.ec2.internal container=operator container exited with code 255 (Error): streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:52:01.637862       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:52:01.637896       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:52:01.638385       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:52:01.638996       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:52:01.639226       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 02:56:06.675823       1 operator.go:145] Starting syncing operator at 2020-02-25 02:56:06.675811615 +0000 UTC m=+1193.111406536\nI0225 02:56:06.909820       1 operator.go:147] Finished syncing operator at 233.996179ms\nI0225 02:56:06.909869       1 operator.go:145] Starting syncing operator at 2020-02-25 02:56:06.90986339 +0000 UTC m=+1193.345458120\nI0225 02:56:06.961334       1 operator.go:147] Finished syncing operator at 51.465562ms\nI0225 02:56:07.007257       1 operator.go:145] Starting syncing operator at 2020-02-25 02:56:07.007243159 +0000 UTC m=+1193.442838168\nI0225 02:56:07.047156       1 operator.go:147] Finished syncing operator at 39.907037ms\nI0225 02:56:07.047199       1 operator.go:145] Starting syncing operator at 2020-02-25 02:56:07.04719396 +0000 UTC m=+1193.482788851\nI0225 02:56:07.095361       1 operator.go:147] Finished syncing operator at 48.16116ms\nI0225 02:56:17.438271       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 02:56:17.438951       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0225 02:56:17.438971       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0225 02:56:17.438987       1 logging_controller.go:93] Shutting down LogLevelController\nF0225 02:56:17.439066       1 builder.go:243] stopped\n
Feb 25 02:56:20.108 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/25 02:42:56 Watching directory: "/etc/alertmanager/config"\n
Feb 25 02:56:20.108 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/25 02:42:58 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:42:58 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:42:58 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:42:58 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/25 02:42:58 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:42:58 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:42:58 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0225 02:42:58.381653       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/25 02:42:58 http.go:107: HTTPS: listening on [::]:9095\n
Feb 25 02:56:21.375 E ns/openshift-monitoring pod/node-exporter-flt6t node/ip-10-0-156-20.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:34Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:49Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:04Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:19Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 02:56:21.439 E ns/openshift-monitoring pod/kube-state-metrics-d8f6c474f-frnxb node/ip-10-0-155-150.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 25 02:56:24.047 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-784f89b7b5-gkq9x node/ip-10-0-133-131.ec2.internal container=manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:56:27.114 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6d649b44b4-vkv2c node/ip-10-0-137-12.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 25 02:56:29.502 E ns/openshift-monitoring pod/prometheus-adapter-78b99dd476-zws77 node/ip-10-0-155-150.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0225 02:42:16.081029       1 adapter.go:93] successfully using in-cluster auth\nI0225 02:42:16.704112       1 secure_serving.go:116] Serving securely on [::]:6443\nW0225 02:46:10.392475       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Node ended with: too old resource version: 19011 (19588)\n
Feb 25 02:56:32.946 E ns/openshift-monitoring pod/thanos-querier-d79c46f8-l4sth node/ip-10-0-155-150.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/25 02:43:50 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/25 02:43:50 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:43:50 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:43:50 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/25 02:43:50 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:43:50 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/25 02:43:50 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/25 02:43:50 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/25 02:43:50 http.go:107: HTTPS: listening on [::]:9091\nI0225 02:43:50.815453       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 25 02:56:33.249 E ns/openshift-monitoring pod/prometheus-adapter-78b99dd476-hjhlk node/ip-10-0-137-12.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0225 02:42:15.874892       1 adapter.go:93] successfully using in-cluster auth\nI0225 02:42:16.633048       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 25 02:56:41.684 E ns/openshift-monitoring pod/telemeter-client-8679d64f6f-cr4bd node/ip-10-0-155-150.ec2.internal container=reload container exited with code 2 (Error): 
Feb 25 02:56:41.684 E ns/openshift-monitoring pod/telemeter-client-8679d64f6f-cr4bd node/ip-10-0-155-150.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Feb 25 02:56:41.753 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T02:56:39.736Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T02:56:39.742Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T02:56:39.742Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T02:56:39.743Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T02:56:39.743Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T02:56:39.743Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T02:56:39.743Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T02:56:39.744Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 02:56:46.455 E ns/openshift-service-ca-operator pod/service-ca-operator-7d947cbd5c-dgjc8 node/ip-10-0-133-131.ec2.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:56:48.267 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/25 02:56:27 Watching directory: "/etc/alertmanager/config"\n
Feb 25 02:56:48.267 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/25 02:56:28 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:28 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:56:28 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:56:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/25 02:56:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:56:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0225 02:56:28.487096       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/25 02:56:28 http.go:107: HTTPS: listening on [::]:9095\n
Feb 25 02:56:51.215 E ns/openshift-service-ca pod/service-ca-598888f7dd-htkx6 node/ip-10-0-142-81.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Feb 25 02:56:54.848 E ns/openshift-operator-lifecycle-manager pod/packageserver-74769d7dfd-mvwzf node/ip-10-0-133-131.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:00.801 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-137-35.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/25 02:56:56 Watching directory: "/etc/alertmanager/config"\n
Feb 25 02:57:00.801 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-137-35.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/25 02:56:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:56:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:56:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/25 02:56:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:56:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0225 02:56:57.133119       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/25 02:56:57 http.go:107: HTTPS: listening on [::]:9095\n
Feb 25 02:57:00.864 E ns/openshift-controller-manager pod/controller-manager-wrtc5 node/ip-10-0-156-20.ec2.internal container=controller-manager container exited with code 137 (Error): I0225 02:37:49.749474       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0225 02:37:49.751261       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-8c6j24f3/stable-initial@sha256:3afb6f5f3bccfe85823b7e2f14c112eae85cd1d3b3580c06b79b2a18887f1106"\nI0225 02:37:49.751338       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-8c6j24f3/stable-initial@sha256:912a6797fdba6c38c4c1b8a8f0bf9bd38896cfba0813ce623b151d09c9bbb499"\nI0225 02:37:49.751440       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0225 02:37:49.751447       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 25 02:57:03.676 E ns/openshift-monitoring pod/node-exporter-j5g8v node/ip-10-0-137-35.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 02:57:16.363 E ns/openshift-monitoring pod/thanos-querier-d79c46f8-5fblj node/ip-10-0-137-12.ec2.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:16.363 E ns/openshift-monitoring pod/thanos-querier-d79c46f8-5fblj node/ip-10-0-137-12.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:16.363 E ns/openshift-monitoring pod/thanos-querier-d79c46f8-5fblj node/ip-10-0-137-12.ec2.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:16.363 E ns/openshift-monitoring pod/thanos-querier-d79c46f8-5fblj node/ip-10-0-137-12.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:18.847 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-155-150.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/25 02:43:39 Watching directory: "/etc/alertmanager/config"\n
Feb 25 02:57:18.847 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-155-150.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/25 02:43:39 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:43:39 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:43:39 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:43:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/25 02:43:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:43:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:43:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/25 02:43:39 http.go:107: HTTPS: listening on [::]:9095\nI0225 02:43:39.519249       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 25 02:57:19.348 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-12.ec2.internal container=prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T02:57:13.438Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T02:57:13.445Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T02:57:13.445Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T02:57:13.446Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T02:57:13.446Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T02:57:13.446Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T02:57:13.446Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T02:57:13.447Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T02:57:13.447Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T02:57:13.447Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:13.447Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:13.447Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T02:57:13.447Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T02:57:13.447Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T02:57:13.449Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T02:57:13.449Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 02:57:20.392 E ns/openshift-monitoring pod/node-exporter-qmt52 node/ip-10-0-142-81.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 02:57:28.379 E ns/openshift-monitoring pod/node-exporter-pz9wt node/ip-10-0-137-12.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:55:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T02:56:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 02:57:33.959 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T02:57:29.067Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T02:57:29.071Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T02:57:29.072Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T02:57:29.074Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T02:57:29.074Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 02:57:44.441 E ns/openshift-marketplace pod/redhat-operators-5db79c646f-kzrbb node/ip-10-0-137-12.ec2.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 02:57:46.145 E ns/openshift-console-operator pod/console-operator-6989fc7cf8-8dm44 node/ip-10-0-156-20.ec2.internal container=console-operator container exited with code 255 (Error): ror on the server ("unable to decode an event from the watch stream: stream error: stream ID 827; INTERNAL_ERROR") has prevented the request from succeeding\nW0225 02:52:02.874788       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 935; INTERNAL_ERROR") has prevented the request from succeeding\nW0225 02:52:02.874945       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 941; INTERNAL_ERROR") has prevented the request from succeeding\nW0225 02:52:37.297637       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 1075; INTERNAL_ERROR") has prevented the request from succeeding\nI0225 02:57:45.076385       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 02:57:45.077150       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0225 02:57:45.077210       1 controller.go:70] Shutting down Console\nI0225 02:57:45.077254       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0225 02:57:45.077299       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0225 02:57:45.077344       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0225 02:57:45.077394       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0225 02:57:45.077431       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0225 02:57:45.077461       1 status_controller.go:212] Shutting down StatusSyncer-console\nF0225 02:57:45.077491       1 builder.go:243] stopped\n
Feb 25 02:57:55.078 E ns/openshift-marketplace pod/community-operators-746b99c6b5-kwsmh node/ip-10-0-155-150.ec2.internal container=community-operators container exited with code 2 (Error): 
Feb 25 02:59:45.617 E ns/openshift-sdn pod/sdn-controller-2wp9v node/ip-10-0-156-20.ec2.internal container=sdn-controller container exited with code 2 (Error): .go:115] Allocated netid 1923119 for namespace "openshift"\nI0225 02:35:36.872284       1 vnids.go:115] Allocated netid 5554068 for namespace "openshift-node"\nI0225 02:35:46.787213       1 vnids.go:115] Allocated netid 1412023 for namespace "openshift-console"\nI0225 02:35:46.906640       1 vnids.go:115] Allocated netid 16497607 for namespace "openshift-console-operator"\nI0225 02:36:45.238637       1 vnids.go:115] Allocated netid 13784918 for namespace "openshift-ingress"\nI0225 02:46:58.627688       1 vnids.go:115] Allocated netid 5931208 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-5581"\nI0225 02:46:58.659552       1 vnids.go:115] Allocated netid 15359515 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-3195"\nI0225 02:46:58.685004       1 vnids.go:115] Allocated netid 10874907 for namespace "e2e-k8s-sig-apps-job-upgrade-9518"\nI0225 02:46:58.720293       1 vnids.go:115] Allocated netid 8736876 for namespace "e2e-frontend-ingress-available-9299"\nI0225 02:46:58.757151       1 vnids.go:115] Allocated netid 10743689 for namespace "e2e-k8s-service-lb-available-3589"\nI0225 02:46:58.787584       1 vnids.go:115] Allocated netid 14458435 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-9041"\nI0225 02:46:58.827856       1 vnids.go:115] Allocated netid 14545209 for namespace "e2e-control-plane-available-1618"\nI0225 02:46:58.864866       1 vnids.go:115] Allocated netid 483737 for namespace "e2e-k8s-sig-apps-deployment-upgrade-4079"\nI0225 02:46:58.894666       1 vnids.go:115] Allocated netid 3095128 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-2889"\nE0225 02:48:08.633403       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=20810&timeout=8m38s&timeoutSeconds=518&watch=true: dial tcp 10.0.152.238:6443: connect: connection refused\n
Feb 25 03:00:00.999 E ns/openshift-sdn pod/sdn-controller-mghtl node/ip-10-0-142-81.ec2.internal container=sdn-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:00:13.379 E ns/openshift-multus pod/multus-dhwvn node/ip-10-0-155-150.ec2.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 25 03:00:13.571 E ns/openshift-multus pod/multus-admission-controller-k5f74 node/ip-10-0-133-131.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 25 03:00:50.857 E ns/openshift-multus pod/multus-r8f55 node/ip-10-0-137-12.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 25 03:01:19.177 E ns/openshift-sdn pod/sdn-xvpsn node/ip-10-0-137-35.ec2.internal container=sdn container exited with code 255 (Error): .0.4:6443 10.130.0.88:6443]\nI0225 03:00:23.623331    2741 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0225 03:00:23.847872    2741 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:00:23.847896    2741 proxier.go:347] userspace syncProxyRules took 70.119039ms\nI0225 03:00:24.091114    2741 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:00:24.091141    2741 proxier.go:347] userspace syncProxyRules took 70.2583ms\nI0225 03:00:54.327726    2741 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:00:54.327753    2741 proxier.go:347] userspace syncProxyRules took 71.951234ms\nI0225 03:01:03.379887    2741 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.4:6443 10.129.0.73:6443 10.130.0.88:6443]\nI0225 03:01:03.404783    2741 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.73:6443 10.130.0.88:6443]\nI0225 03:01:03.404812    2741 roundrobin.go:217] Delete endpoint 10.128.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0225 03:01:03.630747    2741 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:03.630769    2741 proxier.go:347] userspace syncProxyRules took 71.004149ms\nI0225 03:01:03.867992    2741 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:03.868021    2741 proxier.go:347] userspace syncProxyRules took 70.263576ms\nI0225 03:01:17.349990    2741 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0225 03:01:18.770981    2741 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0225 03:01:18.771022    2741 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 25 03:01:29.499 E ns/openshift-multus pod/multus-xshfr node/ip-10-0-156-20.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 25 03:01:34.306 E ns/openshift-multus pod/multus-admission-controller-dwj7s node/ip-10-0-142-81.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 25 03:01:44.974 E ns/openshift-sdn pod/sdn-mmfj4 node/ip-10-0-137-12.ec2.internal container=sdn container exited with code 255 (Error):  to [10.129.0.73:6443 10.130.0.88:6443]\nI0225 03:01:03.405366   12840 roundrobin.go:217] Delete endpoint 10.128.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0225 03:01:03.632526   12840 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:03.632549   12840 proxier.go:347] userspace syncProxyRules took 72.907315ms\nI0225 03:01:03.882813   12840 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:03.882838   12840 proxier.go:347] userspace syncProxyRules took 80.199833ms\nI0225 03:01:27.951260   12840 roundrobin.go:267] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-3589/service-test: to [10.129.2.19:80]\nI0225 03:01:27.951301   12840 roundrobin.go:217] Delete endpoint 10.128.2.14:80 for service "e2e-k8s-service-lb-available-3589/service-test:"\nI0225 03:01:28.201548   12840 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:28.201572   12840 proxier.go:347] userspace syncProxyRules took 70.878748ms\nI0225 03:01:29.961286   12840 roundrobin.go:267] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-3589/service-test: to [10.128.2.14:80 10.129.2.19:80]\nI0225 03:01:30.213478   12840 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:30.213509   12840 proxier.go:347] userspace syncProxyRules took 72.805737ms\nI0225 03:01:38.598782   12840 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0225 03:01:39.340521   12840 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.55:6443 10.129.0.73:6443 10.130.0.88:6443]\nI0225 03:01:39.613459   12840 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:39.613484   12840 proxier.go:347] userspace syncProxyRules took 78.731194ms\nF0225 03:01:44.243682   12840 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 25 03:01:57.005 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-86974895b4-cnjw8 node/ip-10-0-137-12.ec2.internal container=snapshot-controller container exited with code 255 (Error): 
Feb 25 03:02:11.451 E ns/openshift-sdn pod/sdn-mrbmz node/ip-10-0-142-81.ec2.internal container=sdn container exited with code 255 (Error): ress/router-default" on port 32088\nI0225 03:01:11.782766   11885 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0225 03:01:11.782804   11885 cmd.go:173] openshift-sdn network plugin registering startup\nI0225 03:01:11.782906   11885 cmd.go:177] openshift-sdn network plugin ready\nI0225 03:01:27.952360   11885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-3589/service-test: to [10.129.2.19:80]\nI0225 03:01:27.952404   11885 roundrobin.go:217] Delete endpoint 10.128.2.14:80 for service "e2e-k8s-service-lb-available-3589/service-test:"\nI0225 03:01:28.227683   11885 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:28.227707   11885 proxier.go:347] userspace syncProxyRules took 75.762538ms\nI0225 03:01:29.961448   11885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for e2e-k8s-service-lb-available-3589/service-test: to [10.128.2.14:80 10.129.2.19:80]\nI0225 03:01:30.233279   11885 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:30.233307   11885 proxier.go:347] userspace syncProxyRules took 83.460034ms\nI0225 03:01:33.900722   11885 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-dwj7s\nI0225 03:01:35.980330   11885 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-w7qcn got IP 10.128.0.55, ofport 56\nI0225 03:01:39.341195   11885 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.55:6443 10.129.0.73:6443 10.130.0.88:6443]\nI0225 03:01:39.691812   11885 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:01:39.691836   11885 proxier.go:347] userspace syncProxyRules took 121.687227ms\nI0225 03:02:09.990704   11885 proxier.go:368] userspace proxy: processing 0 service events\nI0225 03:02:09.990731   11885 proxier.go:347] userspace syncProxyRules took 96.426087ms\nF0225 03:02:10.516715   11885 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 25 03:02:50.580 E ns/openshift-multus pod/multus-bkz7d node/ip-10-0-142-81.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 25 03:03:28.338 E ns/openshift-multus pod/multus-28vp8 node/ip-10-0-137-35.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 25 03:03:50.230 E ns/openshift-dns pod/dns-default-5sdsl node/ip-10-0-156-20.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:03:50.230 E ns/openshift-dns pod/dns-default-5sdsl node/ip-10-0-156-20.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:03:53.378 E ns/openshift-machine-config-operator pod/machine-config-operator-7885dcc74d-pxwzw node/ip-10-0-133-131.ec2.internal container=machine-config-operator container exited with code 2 (Error): e:"", Name:"machine-config", UID:"e03db8ae-bc8d-4158-8e50-dcaeceddfff0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-02-25-020833}]\nE0225 02:30:48.724016       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0225 02:30:48.725562       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0225 02:30:49.727818       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0225 02:30:53.492909       1 sync.go:61] [init mode] synced RenderConfig in 5.333306133s\nI0225 02:30:53.666463       1 sync.go:61] [init mode] synced MachineConfigPools in 172.879659ms\nI0225 02:31:25.127570       1 sync.go:61] [init mode] synced MachineConfigDaemon in 31.461069896s\nI0225 02:31:29.183342       1 sync.go:61] [init mode] synced MachineConfigController in 4.055722733s\nI0225 02:31:33.272539       1 sync.go:61] [init mode] synced MachineConfigServer in 4.08915058s\nI0225 02:31:47.280000       1 sync.go:61] [init mode] synced RequiredPools in 14.00741406s\nI0225 02:31:47.498650       1 sync.go:85] Initialization complete\nE0225 02:33:51.231646       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Feb 25 03:05:49.626 E ns/openshift-machine-config-operator pod/machine-config-daemon-xc8vg node/ip-10-0-156-20.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:06:05.542 E ns/openshift-machine-config-operator pod/machine-config-daemon-9zdd4 node/ip-10-0-137-12.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:06:09.273 E ns/openshift-machine-config-operator pod/machine-config-daemon-qfgsl node/ip-10-0-142-81.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:06:17.181 E ns/openshift-machine-config-operator pod/machine-config-daemon-rqs6q node/ip-10-0-155-150.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:06:20.663 E ns/openshift-machine-config-operator pod/machine-config-daemon-zjkhq node/ip-10-0-137-35.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:06:32.338 E ns/openshift-machine-config-operator pod/machine-config-controller-86f4b6f74-hqfw2 node/ip-10-0-142-81.ec2.internal container=machine-config-controller container exited with code 2 (Error): hineConfigPool worker\nI0225 02:46:11.723391       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool master\nI0225 02:46:11.991509       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool worker\nI0225 02:48:10.254861       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool master\nI0225 02:48:10.438239       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool worker\nI0225 02:56:07.626064       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool master\nI0225 02:56:07.707962       1 container_runtime_config_controller.go:712] Applied ImageConfig cluster on MachineConfigPool worker\nI0225 02:59:49.585936       1 node_controller.go:433] Pool worker: node ip-10-0-155-150.ec2.internal is now reporting unready: node ip-10-0-155-150.ec2.internal is reporting NotReady=False\nI0225 03:00:27.521246       1 node_controller.go:433] Pool worker: node ip-10-0-137-12.ec2.internal is now reporting unready: node ip-10-0-137-12.ec2.internal is reporting NotReady=False\nI0225 03:00:39.622319       1 node_controller.go:435] Pool worker: node ip-10-0-155-150.ec2.internal is now reporting ready\nI0225 03:01:07.551055       1 node_controller.go:435] Pool worker: node ip-10-0-137-12.ec2.internal is now reporting ready\nI0225 03:01:53.292807       1 node_controller.go:433] Pool master: node ip-10-0-133-131.ec2.internal is now reporting unready: node ip-10-0-133-131.ec2.internal is reporting NotReady=False\nI0225 03:02:23.324111       1 node_controller.go:435] Pool master: node ip-10-0-133-131.ec2.internal is now reporting ready\nI0225 03:03:00.029047       1 node_controller.go:433] Pool worker: node ip-10-0-137-35.ec2.internal is now reporting unready: node ip-10-0-137-35.ec2.internal is reporting NotReady=False\nI0225 03:03:50.079051       1 node_controller.go:435] Pool worker: node ip-10-0-137-35.ec2.internal is now reporting ready\n
Feb 25 03:08:50.516 E ns/openshift-machine-config-operator pod/machine-config-server-bb4ld node/ip-10-0-133-131.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 02:31:30.253093       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 02:31:30.254035       1 api.go:51] Launching server on :22624\nI0225 02:31:30.254488       1 api.go:51] Launching server on :22623\nI0225 02:33:00.719189       1 api.go:97] Pool worker requested by 10.0.137.100:57877\nI0225 02:33:03.010958       1 api.go:97] Pool worker requested by 10.0.152.238:30805\n
Feb 25 03:09:00.974 E ns/openshift-monitoring pod/kube-state-metrics-9cf667846-rd6zg node/ip-10-0-137-35.ec2.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 25 03:09:01.081 E ns/openshift-ingress pod/router-default-67d44dbdd8-7v4tr node/ip-10-0-137-35.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:05:30.185819       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:05:35.180839       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:05:48.715072       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:00.711932       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:05.709654       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:10.708467       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:16.328533       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:21.317219       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:26.326650       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0225 03:06:31.317260       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 25 03:09:01.118 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-6db948cd9f-vmshd node/ip-10-0-137-35.ec2.internal container=operator container exited with code 255 (Error): Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods")\nI0225 03:01:57.063069       1 operator.go:147] Finished syncing operator at 22.967203ms\nI0225 03:01:58.016146       1 operator.go:145] Starting syncing operator at 2020-02-25 03:01:58.016134621 +0000 UTC m=+341.546241223\nI0225 03:01:58.045486       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-25T02:36:15Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-25T03:01:58Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-25T03:01:58Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-25T02:36:18Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0225 03:01:58.045616       1 operator.go:147] Finished syncing operator at 29.473668ms\nI0225 03:01:58.045668       1 operator.go:145] Starting syncing operator at 2020-02-25 03:01:58.045661761 +0000 UTC m=+341.575768304\nI0225 03:01:58.051820       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"2a198dd9-bd0e-456c-a68b-d4fe2bca830b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("")\nI0225 03:01:58.066272       1 operator.go:147] Finished syncing operator at 20.604863ms\nI0225 03:08:59.847631       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:08:59.848037       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0225 03:08:59.848064       1 builder.go:210] server exited\n
Feb 25 03:09:01.984 E ns/openshift-cluster-machine-approver pod/machine-approver-6b756d75c-k2mr9 node/ip-10-0-142-81.ec2.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0225 02:56:23.858267       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0225 02:56:23.858341       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0225 02:56:23.858448       1 main.go:236] Starting Machine Approver\nI0225 02:56:23.958761       1 main.go:146] CSR csr-lwtcv added\nI0225 02:56:23.958803       1 main.go:149] CSR csr-lwtcv is already approved\nI0225 02:56:23.958821       1 main.go:146] CSR csr-pf6ns added\nI0225 02:56:23.958830       1 main.go:149] CSR csr-pf6ns is already approved\nI0225 02:56:23.958842       1 main.go:146] CSR csr-t2r25 added\nI0225 02:56:23.958851       1 main.go:149] CSR csr-t2r25 is already approved\nI0225 02:56:23.958862       1 main.go:146] CSR csr-zhf6d added\nI0225 02:56:23.958871       1 main.go:149] CSR csr-zhf6d is already approved\nI0225 02:56:23.958882       1 main.go:146] CSR csr-gv65t added\nI0225 02:56:23.958891       1 main.go:149] CSR csr-gv65t is already approved\nI0225 02:56:23.958911       1 main.go:146] CSR csr-jztkn added\nI0225 02:56:23.958920       1 main.go:149] CSR csr-jztkn is already approved\nI0225 02:56:23.958934       1 main.go:146] CSR csr-m9jbb added\nI0225 02:56:23.958943       1 main.go:149] CSR csr-m9jbb is already approved\nI0225 02:56:23.958956       1 main.go:146] CSR csr-pw4vp added\nI0225 02:56:23.958965       1 main.go:149] CSR csr-pw4vp is already approved\nI0225 02:56:23.959035       1 main.go:146] CSR csr-rhbcc added\nI0225 02:56:23.959046       1 main.go:149] CSR csr-rhbcc is already approved\nI0225 02:56:23.959058       1 main.go:146] CSR csr-rp4bd added\nI0225 02:56:23.959066       1 main.go:149] CSR csr-rp4bd is already approved\nI0225 02:56:23.959077       1 main.go:146] CSR csr-62w8v added\nI0225 02:56:23.959087       1 main.go:149] CSR csr-62w8v is already approved\nI0225 02:56:23.959106       1 main.go:146] CSR csr-6prmt added\nI0225 02:56:23.959115       1 main.go:149] CSR csr-6prmt is already approved\n
Feb 25 03:09:02.002 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-59dcf94b67-5scxz node/ip-10-0-142-81.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-25T02:33:10Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 5","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-25T02:30:57Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0225 03:02:23.401937       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"cb33dccd-0e71-4553-9d91-01ff548782e1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-133-131.ec2.internal\" not ready since 2020-02-25 03:01:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0225 03:02:23.410966       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"cb33dccd-0e71-4553-9d91-01ff548782e1", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-133-131.ec2.internal\" not ready since 2020-02-25 03:01:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0225 03:09:00.720127       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0225 03:09:00.721287       1 builder.go:210] server exited\n
Feb 25 03:09:03.467 E ns/openshift-machine-config-operator pod/machine-config-server-g29kn node/ip-10-0-156-20.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 02:31:30.044102       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 02:31:30.045058       1 api.go:51] Launching server on :22624\nI0225 02:31:30.045389       1 api.go:51] Launching server on :22623\n
Feb 25 03:09:06.104 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5d86448c99-jpd4k node/ip-10-0-142-81.ec2.internal container=kube-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:09:06.307 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-69dd5499b5-dw7l5 node/ip-10-0-142-81.ec2.internal container=kube-storage-version-migrator-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:09:09.066 E ns/openshift-machine-config-operator pod/machine-config-server-lvrbr node/ip-10-0-142-81.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 02:31:31.898027       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 02:31:31.899677       1 api.go:51] Launching server on :22624\nI0225 02:31:31.899728       1 api.go:51] Launching server on :22623\nI0225 02:33:04.378571       1 api.go:97] Pool worker requested by 10.0.152.238:55984\n
Feb 25 03:09:15.982 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-86974895b4-cnjw8 node/ip-10-0-137-12.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 25 03:09:16.383 E kube-apiserver failed contacting the API: Get https://api.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=35327&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp 52.73.143.15:6443: connect: connection refused
Feb 25 03:09:26.592 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:11:35.531 E ns/openshift-etcd pod/etcd-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-25 02:50:46.149646 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-142-81.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-142-81.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:50:46.150344 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-25 02:50:46.150691 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-81.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-81.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:50:46.153018 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/25 02:50:46 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.142.81:9978: connect: connection refused". Reconnecting...\n
Feb 25 03:11:35.570 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=scheduler container exited with code 2 (Error): n-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-25 02:14:47 +0000 UTC to 2030-02-22 02:14:47 +0000 UTC (now=2020-02-25 02:52:11.387205421 +0000 UTC))\nI0225 02:52:11.387290       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582597858" [] issuer="kubelet-signer" (2020-02-25 02:30:58 +0000 UTC to 2020-02-26 02:14:52 +0000 UTC (now=2020-02-25 02:52:11.387272253 +0000 UTC))\nI0225 02:52:11.387365       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-25 02:14:50 +0000 UTC to 2020-02-26 02:14:50 +0000 UTC (now=2020-02-25 02:52:11.387349512 +0000 UTC))\nI0225 02:52:11.387776       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1582597858" (2020-02-25 02:31:07 +0000 UTC to 2022-02-24 02:31:08 +0000 UTC (now=2020-02-25 02:52:11.387754896 +0000 UTC))\nI0225 02:52:11.389191       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582599128" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582599128" (2020-02-25 01:52:08 +0000 UTC to 2021-02-24 01:52:08 +0000 UTC (now=2020-02-25 02:52:11.389169387 +0000 UTC))\nI0225 02:52:11.391702       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Feb 25 03:11:35.606 E ns/openshift-controller-manager pod/controller-manager-v8wv5 node/ip-10-0-142-81.ec2.internal container=controller-manager container exited with code 1 (Error): I0225 02:57:17.075370       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0225 02:57:17.076806       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-8c6j24f3/stable@sha256:3afb6f5f3bccfe85823b7e2f14c112eae85cd1d3b3580c06b79b2a18887f1106"\nI0225 02:57:17.076828       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-8c6j24f3/stable@sha256:912a6797fdba6c38c4c1b8a8f0bf9bd38896cfba0813ce623b151d09c9bbb499"\nI0225 02:57:17.076896       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0225 02:57:17.077028       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 25 03:11:35.638 E ns/openshift-monitoring pod/node-exporter-w7vk5 node/ip-10-0-142-81.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:08:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:08:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:08:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:08:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:08:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:11:35.702 E ns/openshift-sdn pod/sdn-controller-c8dqs node/ip-10-0-142-81.ec2.internal container=sdn-controller container exited with code 2 (Error): I0225 03:00:05.533396       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 25 03:11:35.815 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-apiserver container exited with code 1 (Error): er.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876336       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876681       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876713       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876742       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876769       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.876797       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.877011       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:09:15.878465       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0225 03:09:15.938839       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-142-81.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0225 03:09:15.939075       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\nW0225 03:09:15.984493       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.0.133.131 10.0.156.20]\n2020/02/25 03:09:15 httputil: ReverseProxy read error during body copy: unexpected EOF\nI0225 03:09:16.004314       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-142-81.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Feb 25 03:11:35.815 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0225 02:52:05.110177       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 25 03:11:35.815 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:09:00.002056       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:00.002485       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:09:10.016980       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:10.022387       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 25 03:11:35.815 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0225 02:52:04.521753       1 cmd.go:200] Using insecure, self-signed certificates\nI0225 02:52:04.522240       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1582599124 cert, and key in /tmp/serving-cert-119051395/serving-signer.crt, /tmp/serving-cert-119051395/serving-signer.key\nI0225 02:52:05.603109       1 observer_polling.go:155] Starting file observer\nI0225 02:52:13.119882       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0225 03:09:15.878548       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0225 03:09:15.878585       1 leaderelection.go:67] leaderelection lost\n
Feb 25 03:11:35.855 E ns/openshift-multus pod/multus-admission-controller-w7qcn node/ip-10-0-142-81.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 25 03:11:35.902 E ns/openshift-machine-config-operator pod/machine-config-daemon-fqb9x node/ip-10-0-142-81.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:11:35.956 E ns/openshift-machine-config-operator pod/machine-config-server-klkc8 node/ip-10-0-142-81.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 03:09:14.567839       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 03:09:14.569626       1 api.go:51] Launching server on :22624\nI0225 03:09:14.569744       1 api.go:51] Launching server on :22623\n
Feb 25 03:11:36.103 E ns/openshift-cluster-node-tuning-operator pod/tuned-cqmg5 node/ip-10-0-142-81.ec2.internal container=tuned container exited with code 143 (Error): 19] extracting tuned profiles\nI0225 02:56:52.335244     631 tuned.go:469] profile "ip-10-0-142-81.ec2.internal" added, tuned profile requested: openshift-control-plane\nI0225 02:56:52.335331     631 tuned.go:170] disabling system tuned...\nI0225 02:56:52.339670     631 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0225 02:56:53.321877     631 tuned.go:393] getting recommended profile...\nI0225 02:56:53.547506     631 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0225 02:56:53.547593     631 tuned.go:286] starting tuned...\n2020-02-25 02:56:53,703 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-25 02:56:53,717 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:56:53,717 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:56:53,718 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-25 02:56:53,720 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-25 02:56:53,791 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:56:53,791 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:56:53,805 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:56:53,806 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:56:53,810 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:56:53,811 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:56:53,813 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:56:54,016 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:56:54,026 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Feb 25 03:11:36.224 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorsources": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorsources", couldn't start monitor for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests": unable to monitor quota for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests", couldn't start monitor for resource "network.operator.openshift.io/v1, Resource=operatorpkis": unable to monitor quota for resource "network.operator.openshift.io/v1, Resource=operatorpkis", couldn't start monitor for resource "network.openshift.io/v1, Resource=egressnetworkpolicies": unable to monitor quota for resource "network.openshift.io/v1, Resource=egressnetworkpolicies", couldn't start monitor for resource "metal3.io/v1alpha1, Resource=baremetalhosts": unable to monitor quota for resource "metal3.io/v1alpha1, Resource=baremetalhosts", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks"]\nI0225 02:53:35.581561       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0225 02:53:35.581575       1 policy_controller.go:147] Started Origin Controllers\nI0225 02:53:35.581600       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0225 02:53:35.582178       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0225 02:53:35.583820       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0225 02:53:35.635392       1 shared_informer.go:204] Caches are synced for resource quota \n
Feb 25 03:11:36.224 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:08:40.834485       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:08:40.834838       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:08:43.891205       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:08:43.891653       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:08:50.847119       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:08:50.847522       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:08:53.900434       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:08:53.900797       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:09:00.880435       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:00.880794       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:09:03.941512       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:03.941890       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:09:10.908568       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:10.908953       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:09:13.951262       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:09:13.951934       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 25 03:11:36.224 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-25 02:14:47 +0000 UTC to 2030-02-22 02:14:47 +0000 UTC (now=2020-02-25 02:53:32.204704104 +0000 UTC))\nI0225 02:53:32.204758       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-25 02:14:50 +0000 UTC to 2020-02-26 02:14:50 +0000 UTC (now=2020-02-25 02:53:32.204743762 +0000 UTC))\nI0225 02:53:32.205013       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582597858" (2020-02-25 02:31:08 +0000 UTC to 2022-02-24 02:31:09 +0000 UTC (now=2020-02-25 02:53:32.204996485 +0000 UTC))\nI0225 02:53:32.205229       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582599212" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582599212" (2020-02-25 01:53:31 +0000 UTC to 2021-02-24 01:53:31 +0000 UTC (now=2020-02-25 02:53:32.205218446 +0000 UTC))\nI0225 02:53:32.205271       1 secure_serving.go:178] Serving securely on [::]:10257\nI0225 02:53:32.205315       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0225 02:53:32.205469       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 25 03:11:36.224 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-81.ec2.internal node/ip-10-0-142-81.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): 57] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-25 02:14:47 +0000 UTC to 2030-02-22 02:14:47 +0000 UTC (now=2020-02-25 02:53:34.820888722 +0000 UTC))\nI0225 02:53:34.820930       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582597858" [] issuer="kubelet-signer" (2020-02-25 02:30:58 +0000 UTC to 2020-02-26 02:14:52 +0000 UTC (now=2020-02-25 02:53:34.820917145 +0000 UTC))\nI0225 02:53:34.820965       1 tlsconfig.go:157] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-25 02:14:50 +0000 UTC to 2020-02-26 02:14:50 +0000 UTC (now=2020-02-25 02:53:34.8209512 +0000 UTC))\nI0225 02:53:34.821374       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-644836781/tls.crt::/tmp/serving-cert-644836781/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1582599213" (2020-02-25 02:53:32 +0000 UTC to 2020-03-26 02:53:33 +0000 UTC (now=2020-02-25 02:53:34.821354647 +0000 UTC))\nI0225 02:53:34.821705       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582599214" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582599214" (2020-02-25 01:53:34 +0000 UTC to 2021-02-24 01:53:34 +0000 UTC (now=2020-02-25 02:53:34.821685545 +0000 UTC))\nI0225 03:09:16.023950       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0225 03:09:16.024065       1 leaderelection.go:67] leaderelection lost\n
Feb 25 03:11:36.260 E ns/openshift-multus pod/multus-lg7sq node/ip-10-0-142-81.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:11:41.211 E ns/openshift-multus pod/multus-lg7sq node/ip-10-0-142-81.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:11:43.286 E ns/openshift-monitoring pod/node-exporter-srxf9 node/ip-10-0-137-35.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:09:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:11:43.302 E ns/openshift-cluster-node-tuning-operator pod/tuned-wnrhv node/ip-10-0-137-35.ec2.internal container=tuned container exited with code 143 (Error): n: Unit file tuned.service does not exist.\nI0225 02:57:23.707046     505 tuned.go:393] getting recommended profile...\nI0225 02:57:23.827990     505 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0225 02:57:23.828098     505 tuned.go:286] starting tuned...\n2020-02-25 02:57:23,938 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-25 02:57:23,944 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:57:23,944 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:57:23,945 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-25 02:57:23,946 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-25 02:57:23,980 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:57:23,980 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:57:23,990 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:57:23,991 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:57:23,994 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:57:23,996 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:57:23,997 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:57:24,108 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:57:24,118 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0225 03:09:16.323760     505 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:09:16.323760     505 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:09:57.016226     505 tuned.go:115] received signal: terminated\nI0225 03:09:57.016267     505 tuned.go:327] sending TERM to PID 565\n
Feb 25 03:11:43.346 E ns/openshift-multus pod/multus-v4rnm node/ip-10-0-137-35.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:11:43.376 E ns/openshift-machine-config-operator pod/machine-config-daemon-8cq8z node/ip-10-0-137-35.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:11:44.567 E ns/openshift-multus pod/multus-lg7sq node/ip-10-0-142-81.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:11:45.178 E ns/openshift-monitoring pod/node-exporter-srxf9 node/ip-10-0-137-35.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:11:46.161 E ns/openshift-multus pod/multus-v4rnm node/ip-10-0-137-35.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:11:47.539 E ns/openshift-machine-config-operator pod/machine-config-daemon-fqb9x node/ip-10-0-142-81.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:11:51.720 E ns/openshift-machine-config-operator pod/machine-config-daemon-8cq8z node/ip-10-0-137-35.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:11:55.423 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 25 03:11:59.228 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDaemonSet_UnavailablePod: APIServerDaemonSetDegraded: 1 of 3 requested instances are unavailable
Feb 25 03:12:00.600 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/25 02:56:57 Watching directory: "/etc/alertmanager/config"\n
Feb 25 03:12:00.600 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-12.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/25 02:56:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:56:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:56:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/25 02:56:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:56:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/25 02:56:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0225 02:56:57.996112       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/25 02:56:57 http.go:107: HTTPS: listening on [::]:9095\n
Feb 25 03:12:00.644 E ns/openshift-monitoring pod/prometheus-adapter-5f749fb48c-wjrgq node/ip-10-0-137-12.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0225 02:56:32.157194       1 adapter.go:93] successfully using in-cluster auth\nI0225 02:56:33.200106       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 25 03:12:00.811 E ns/openshift-kube-storage-version-migrator pod/migrator-55fd8f8db6-7wjrs node/ip-10-0-137-12.ec2.internal container=migrator container exited with code 2 (Error): I0225 03:09:16.325939       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 25 03:12:00.839 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-769f77b974-5972k node/ip-10-0-137-12.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 25 03:12:00.872 E ns/openshift-monitoring pod/openshift-state-metrics-7956ff8c57-6npgd node/ip-10-0-137-12.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 25 03:12:03.068 E ns/openshift-cluster-version pod/cluster-version-operator-75f65b988c-jqlrp node/ip-10-0-142-81.ec2.internal container=cluster-version-operator container exited with code 255 (Error): I0225 03:12:01.797837       1 start.go:19] ClusterVersionOperator v1.0.0-194-g611490ff-dirty\nI0225 03:12:01.798262       1 merged_client_builder.go:122] Using in-cluster configuration\nI0225 03:12:01.804785       1 payload.go:210] Loading updatepayload from "/"\nI0225 03:12:02.810725       1 cvo.go:264] Verifying release authenticity: All release image digests must have GPG signatures from verifier-public-key-openshift-ci (D04761B116203B0C0859B61628B76E05B923888E: openshift-ci) - will check for signatures in containers/image format at https://storage.googleapis.com/openshift-release/test-1/signatures/openshift/release and from config maps in openshift-config-managed with label "release.openshift.io/verification-signatures"\nI0225 03:12:02.811245       1 leaderelection.go:241] attempting to acquire leader lease  openshift-cluster-version/version...\nF0225 03:12:02.811548       1 start.go:163] Unable to start metrics server: listen tcp 0.0.0.0:9099: bind: address already in use\n
Feb 25 03:12:09.556 E ns/openshift-machine-api pod/machine-api-operator-7c7b649bc9-g8lgk node/ip-10-0-156-20.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 25 03:12:11.715 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-764bffd5c5-z5jlj node/ip-10-0-156-20.ec2.internal container=cluster-node-tuning-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:12:12.268 E ns/openshift-operator-lifecycle-manager pod/packageserver-78bdf8568d-54qwz node/ip-10-0-142-81.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:12:22.736 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-35.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T03:12:18.251Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T03:12:18.253Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T03:12:18.254Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T03:12:18.255Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T03:12:18.255Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T03:12:18.255Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T03:12:18.256Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T03:12:18.256Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 03:12:41.592 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:13:41.592 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:14:01.917 E clusteroperator/monitoring changed Degraded to True: UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: waiting for Alertmanager Route to become ready failed: waiting for RouteReady of alertmanager-main: the server is currently unable to handle the request (get routes.route.openshift.io alertmanager-main)
Feb 25 03:14:11.592 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:14:42.398 E ns/openshift-cluster-node-tuning-operator pod/tuned-hd7n5 node/ip-10-0-137-12.ec2.internal container=tuned container exited with code 143 (Error): on.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:56:36,940 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:56:36,941 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-25 02:56:36,941 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-25 02:56:36,974 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:56:36,975 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:56:36,986 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:56:36,986 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:56:36,990 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:56:36,992 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:56:36,994 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:56:37,104 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:56:37,117 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0225 03:09:16.331431    1256 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:09:16.331821    1256 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:12:28.467992    1256 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:12:28.468387    1256 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0225 03:12:28.472334    1256 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 25 03:14:42.446 E ns/openshift-monitoring pod/node-exporter-hgpf7 node/ip-10-0-137-12.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:56Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:41Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:14:42.500 E ns/openshift-multus pod/multus-d5wsr node/ip-10-0-137-12.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:14:42.566 E ns/openshift-machine-config-operator pod/machine-config-daemon-zp74m node/ip-10-0-137-12.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:14:45.207 E ns/openshift-multus pod/multus-d5wsr node/ip-10-0-137-12.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:14:46.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): 2.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0225 03:09:17.908180       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0225 03:12:28.064118       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:12:28.064555       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0225 03:12:28.064667       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0225 03:12:28.064686       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0225 03:12:28.064780       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0225 03:12:28.064843       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0225 03:12:28.064917       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0225 03:12:28.064977       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0225 03:12:28.065004       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0225 03:12:28.065017       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0225 03:12:28.065032       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0225 03:12:28.065052       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0225 03:12:28.065062       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Feb 25 03:14:46.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-apiserver container exited with code 1 (Error): ror while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:12:28.107097       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:12:28.107482       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nE0225 03:12:28.115537       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.115640       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.115658       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.117338       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.117362       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.117871       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.117897       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.118182       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.118252       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.118284       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0225 03:12:28.118398       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Feb 25 03:14:46.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0225 02:54:04.842334       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 25 03:14:46.372 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:12:20.646973       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:20.651751       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:12:23.209434       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:23.210077       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 25 03:14:46.399 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "authorization.openshift.io/v1, Resource=rolebindingrestrictions": unable to monitor quota for resource "authorization.openshift.io/v1, Resource=rolebindingrestrictions", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots": unable to monitor quota for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots", couldn't start monitor for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords": unable to monitor quota for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions"]\nI0225 03:10:23.261700       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0225 03:10:23.261723       1 policy_controller.go:147] Started Origin Controllers\nI0225 03:10:23.261732       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0225 03:10:23.262231       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0225 03:10:23.265893       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0225 03:10:23.336284       1 shared_informer.go:204] Caches are synced for resource quota \n
Feb 25 03:14:46.399 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:11:50.647212       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:11:50.647642       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:11:51.146293       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:11:51.146854       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:00.765287       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:00.765773       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:01.167730       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:01.168208       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:10.789943       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:10.790737       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:11.193804       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:11.194181       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:20.815638       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:20.816411       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:12:21.207816       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:12:21.208342       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 25 03:14:46.399 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-controller-manager container exited with code 2 (Error): er_utils.go:603] Controller packageserver-59b754cd77 deleting pod openshift-operator-lifecycle-manager/packageserver-59b754cd77-dglzn\nI0225 03:12:20.913720       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"11970f3d-b8bf-4e53-a7a2-3508543017ed", APIVersion:"apps/v1", ResourceVersion:"37691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-59b754cd77 to 0\nI0225 03:12:20.945534       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-59b754cd77", UID:"7ff6809b-205a-4326-b6c6-add87f1607a1", APIVersion:"apps/v1", ResourceVersion:"37962", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-59b754cd77-dglzn\nI0225 03:12:20.952280       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-854d7958d7, need 2, creating 1\nI0225 03:12:20.953094       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"11970f3d-b8bf-4e53-a7a2-3508543017ed", APIVersion:"apps/v1", ResourceVersion:"37963", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-854d7958d7 to 2\nI0225 03:12:20.969144       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-854d7958d7", UID:"ee4d723f-d2b2-42f0-b7ca-ee9211557f8c", APIVersion:"apps/v1", ResourceVersion:"37967", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-854d7958d7-n6bp2\nI0225 03:12:21.158531       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\n
Feb 25 03:14:46.399 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): timeoutSeconds=550&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:54:03.017127       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=24523&timeout=9m44s&timeoutSeconds=584&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:54:03.017213       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=24816&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:54:03.017260       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=24177&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:54:03.017305       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=24177&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:54:03.017349       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/configmaps?allowWatchBookmarks=true&resourceVersion=24523&timeout=8m27s&timeoutSeconds=507&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0225 03:12:28.109163       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:12:28.109790       1 csrcontroller.go:83] Shutting down CSR controller\nI0225 03:12:28.109859       1 csrcontroller.go:85] CSR controller shut down\nF0225 03:12:28.110002       1 builder.go:210] server exited\n
Feb 25 03:14:46.430 E ns/openshift-monitoring pod/node-exporter-j7jgn node/ip-10-0-156-20.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:31Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:11:46Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:01Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:12:16Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:14:46.472 E ns/openshift-cluster-node-tuning-operator pod/tuned-m8976 node/ip-10-0-156-20.ec2.internal container=tuned container exited with code 143 (Error): 19] extracting tuned profiles\nI0225 02:57:18.675528    1556 tuned.go:469] profile "ip-10-0-156-20.ec2.internal" added, tuned profile requested: openshift-control-plane\nI0225 02:57:18.675548    1556 tuned.go:170] disabling system tuned...\nI0225 02:57:18.681880    1556 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0225 02:57:19.661640    1556 tuned.go:393] getting recommended profile...\nI0225 02:57:19.814053    1556 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0225 02:57:19.814131    1556 tuned.go:286] starting tuned...\n2020-02-25 02:57:19,948 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-25 02:57:19,960 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:57:19,960 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:57:19,961 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-25 02:57:19,962 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-25 02:57:20,004 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:57:20,004 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:57:20,018 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:57:20,019 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:57:20,024 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:57:20,026 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:57:20,028 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:57:20,165 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:57:20,179 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Feb 25 03:14:46.498 E ns/openshift-multus pod/multus-admission-controller-pn4fl node/ip-10-0-156-20.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 25 03:14:46.517 E ns/openshift-multus pod/multus-286gj node/ip-10-0-156-20.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:14:46.561 E ns/openshift-machine-config-operator pod/machine-config-daemon-7kqw2 node/ip-10-0-156-20.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:14:46.580 E ns/openshift-machine-config-operator pod/machine-config-server-jvbw7 node/ip-10-0-156-20.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 03:09:08.116203       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 03:09:08.117527       1 api.go:51] Launching server on :22624\nI0225 03:09:08.117690       1 api.go:51] Launching server on :22623\n
Feb 25 03:14:46.594 E ns/openshift-controller-manager pod/controller-manager-c9ws4 node/ip-10-0-156-20.ec2.internal container=controller-manager container exited with code 1 (Error): 3       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.51.225:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.51.225:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0225 02:58:13.045142       1 build_controller.go:474] Starting build controller\nI0225 02:58:13.045161       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0225 03:09:15.988446       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 599; INTERNAL_ERROR") has prevented the request from succeeding\nW0225 03:09:15.988644       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 615; INTERNAL_ERROR") has prevented the request from succeeding\nI0225 03:09:46.792195       1 trace.go:116] Trace[1235990716]: "Reflector ListAndWatch" name:github.com/openshift/client-go/image/informers/externalversions/factory.go:101 (started: 2020-02-25 03:09:16.736244389 +0000 UTC m=+725.401401109) (total time: 30.055903561s):\nTrace[1235990716]: [30.055903561s] [30.055903561s] END\nE0225 03:09:46.792220       1 reflector.go:156] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)\nE0225 03:09:56.813564       1 reflector.go:320] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)\n
Feb 25 03:14:46.730 E ns/openshift-sdn pod/sdn-controller-qqwrt node/ip-10-0-156-20.ec2.internal container=sdn-controller container exited with code 2 (Error): I0225 02:59:58.758242       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0225 02:59:58.786443       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"dae4e048-8707-4836-b0c5-9dbc82fe89e0", ResourceVersion:"30316", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718194422, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-156-20\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-25T02:27:02Z\",\"renewTime\":\"2020-02-25T02:59:58Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-156-20 became leader'\nI0225 02:59:58.786564       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0225 02:59:58.794063       1 master.go:51] Initializing SDN master\nI0225 02:59:58.844024       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 25 03:14:46.761 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=scheduler container exited with code 2 (Error): 179] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-25 02:14:47 +0000 UTC to 2030-02-22 02:14:47 +0000 UTC (now=2020-02-25 02:54:12.044259722 +0000 UTC))\nI0225 02:54:12.044300       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582597858" [] issuer="kubelet-signer" (2020-02-25 02:30:58 +0000 UTC to 2020-02-26 02:14:52 +0000 UTC (now=2020-02-25 02:54:12.04428722 +0000 UTC))\nI0225 02:54:12.044325       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-25 02:14:50 +0000 UTC to 2020-02-26 02:14:50 +0000 UTC (now=2020-02-25 02:54:12.044312619 +0000 UTC))\nI0225 02:54:12.044800       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1582597858" (2020-02-25 02:31:07 +0000 UTC to 2022-02-24 02:31:08 +0000 UTC (now=2020-02-25 02:54:12.044779111 +0000 UTC))\nI0225 02:54:12.056737       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582599244" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582599243" (2020-02-25 01:54:03 +0000 UTC to 2021-02-24 01:54:03 +0000 UTC (now=2020-02-25 02:54:12.045097656 +0000 UTC))\n
Feb 25 03:14:52.017 E ns/openshift-multus pod/multus-286gj node/ip-10-0-156-20.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:14:52.238 E ns/openshift-etcd pod/etcd-ip-10-0-156-20.ec2.internal node/ip-10-0-156-20.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-25 02:51:24.554528 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-20.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-156-20.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:51:24.556351 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-25 02:51:24.556985 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-20.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-156-20.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:51:24.566760 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\n
Feb 25 03:14:52.287 E ns/openshift-machine-config-operator pod/machine-config-daemon-zp74m node/ip-10-0-137-12.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:14:54.277 E ns/openshift-multus pod/multus-286gj node/ip-10-0-156-20.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:15:00.761 E ns/openshift-marketplace pod/certified-operators-55965cbb97-d5rn7 node/ip-10-0-155-150.ec2.internal container=certified-operators container exited with code 2 (Error): 
Feb 25 03:15:00.856 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-6db948cd9f-djbm4 node/ip-10-0-155-150.ec2.internal container=operator container exited with code 255 (Error): ition ended with: very short watch: k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received\nW0225 03:12:28.475829       1 reflector.go:326] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Unexpected watch close - watch lasted less than a second and no items received\nW0225 03:12:28.475902       1 reflector.go:326] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.CSISnapshotController ended with: very short watch: github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nI0225 03:12:30.218935       1 operator.go:145] Starting syncing operator at 2020-02-25 03:12:30.218924972 +0000 UTC m=+201.468814469\nI0225 03:12:30.326330       1 operator.go:147] Finished syncing operator at 107.395559ms\nI0225 03:12:30.326392       1 operator.go:145] Starting syncing operator at 2020-02-25 03:12:30.326383587 +0000 UTC m=+201.576272882\nI0225 03:12:30.422281       1 operator.go:147] Finished syncing operator at 95.886415ms\nI0225 03:12:31.031560       1 operator.go:145] Starting syncing operator at 2020-02-25 03:12:31.031549055 +0000 UTC m=+202.281438435\nI0225 03:12:31.071291       1 operator.go:147] Finished syncing operator at 39.733698ms\nI0225 03:14:58.424003       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:14:58.424625       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0225 03:14:58.424655       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0225 03:14:58.424671       1 logging_controller.go:93] Shutting down LogLevelController\nF0225 03:14:58.424757       1 builder.go:243] stopped\n
Feb 25 03:15:01.862 E ns/openshift-monitoring pod/prometheus-adapter-5f749fb48c-s4j5d node/ip-10-0-155-150.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0225 02:56:27.544434       1 adapter.go:93] successfully using in-cluster auth\nI0225 02:56:28.547553       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 25 03:15:01.982 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T02:57:29.067Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T02:57:29.071Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T02:57:29.072Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T02:57:29.073Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T02:57:29.074Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T02:57:29.074Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 03:15:01.982 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/25 02:57:32 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 25 03:15:01.982 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-25T02:57:30.692254116Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-25T02:57:30.692385308Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-25T02:57:30.695210761Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-25T02:57:35.694201512Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-25T02:57:40.694009919Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-25T02:57:45.946865914Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 25 03:15:02.053 E ns/openshift-marketplace pod/community-operators-5b9ccfdf76-gvd42 node/ip-10-0-155-150.ec2.internal container=community-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:02.120 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:02.120 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:02.120 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-155-150.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:02.316 E ns/openshift-machine-config-operator pod/machine-config-daemon-7kqw2 node/ip-10-0-156-20.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:15:02.908 E ns/openshift-monitoring pod/telemeter-client-5f69844bb8-vd9ks node/ip-10-0-155-150.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Feb 25 03:15:02.908 E ns/openshift-monitoring pod/telemeter-client-5f69844bb8-vd9ks node/ip-10-0-155-150.ec2.internal container=reload container exited with code 2 (Error): 
Feb 25 03:15:06.385 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 25 03:15:15.749 E ns/openshift-console-operator pod/console-operator-f585b5f9-z5lzm node/ip-10-0-133-131.ec2.internal container=console-operator container exited with code 255 (Error): rue","type":"Upgradeable"}]}}\nI0225 03:14:26.904092       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"a72557c4-694c-4e65-b56b-3f37564f1618", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "OAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)" to ""\nI0225 03:15:12.951678       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:15:12.955351       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0225 03:15:12.964073       1 controller.go:70] Shutting down Console\nI0225 03:15:12.964096       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0225 03:15:12.964128       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0225 03:15:12.964761       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0225 03:15:12.964797       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0225 03:15:12.964812       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0225 03:15:12.964825       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0225 03:15:12.964875       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0225 03:15:12.965877       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0225 03:15:12.965973       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0225 03:15:12.966001       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0225 03:15:12.967419       1 builder.go:210] server exited\n
Feb 25 03:15:15.842 E ns/openshift-cluster-machine-approver pod/machine-approver-6b756d75c-22fpm node/ip-10-0-133-131.ec2.internal container=machine-approver-controller container exited with code 2 (Error): 3:09:03.599195       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0225 03:09:03.599706       1 main.go:236] Starting Machine Approver\nI0225 03:09:03.700050       1 main.go:146] CSR csr-zhf6d added\nI0225 03:09:03.700188       1 main.go:149] CSR csr-zhf6d is already approved\nI0225 03:09:03.700263       1 main.go:146] CSR csr-62w8v added\nI0225 03:09:03.700379       1 main.go:149] CSR csr-62w8v is already approved\nI0225 03:09:03.700430       1 main.go:146] CSR csr-6prmt added\nI0225 03:09:03.700470       1 main.go:149] CSR csr-6prmt is already approved\nI0225 03:09:03.700512       1 main.go:146] CSR csr-gv65t added\nI0225 03:09:03.700585       1 main.go:149] CSR csr-gv65t is already approved\nI0225 03:09:03.700630       1 main.go:146] CSR csr-m9jbb added\nI0225 03:09:03.700670       1 main.go:149] CSR csr-m9jbb is already approved\nI0225 03:09:03.700744       1 main.go:146] CSR csr-rhbcc added\nI0225 03:09:03.700788       1 main.go:149] CSR csr-rhbcc is already approved\nI0225 03:09:03.700827       1 main.go:146] CSR csr-rp4bd added\nI0225 03:09:03.700865       1 main.go:149] CSR csr-rp4bd is already approved\nI0225 03:09:03.700937       1 main.go:146] CSR csr-t2r25 added\nI0225 03:09:03.700981       1 main.go:149] CSR csr-t2r25 is already approved\nI0225 03:09:03.701037       1 main.go:146] CSR csr-jztkn added\nI0225 03:09:03.701109       1 main.go:149] CSR csr-jztkn is already approved\nI0225 03:09:03.701167       1 main.go:146] CSR csr-lwtcv added\nI0225 03:09:03.701210       1 main.go:149] CSR csr-lwtcv is already approved\nI0225 03:09:03.701290       1 main.go:146] CSR csr-pf6ns added\nI0225 03:09:03.701347       1 main.go:149] CSR csr-pf6ns is already approved\nI0225 03:09:03.701395       1 main.go:146] CSR csr-pw4vp added\nI0225 03:09:03.701469       1 main.go:149] CSR csr-pw4vp is already approved\nW0225 03:12:29.790672       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 25823 (38122)\n
Feb 25 03:15:17.277 E ns/openshift-console pod/console-778cd477b8-88k8r node/ip-10-0-133-131.ec2.internal container=console container exited with code 2 (Error): 2020-02-25T03:09:09Z cmd/main: cookies are secure!\n2020-02-25T03:09:10Z cmd/main: Binding to [::]:8443...\n2020-02-25T03:09:10Z cmd/main: using TLS\n2020-02-25T03:12:54Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-02-25T03:12:54Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Feb 25 03:15:17.394 E ns/openshift-etcd-operator pod/etcd-operator-76df968c-2pjm6 node/ip-10-0-133-131.ec2.internal container=operator container exited with code 255 (Error): ownController\nI0225 03:15:14.533229       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0225 03:15:14.535407       1 base_controller.go:74] Shutting down InstallerController ...\nI0225 03:15:14.535429       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0225 03:15:14.535475       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0225 03:15:14.535495       1 base_controller.go:39] All RevisionController workers have been terminated\nI0225 03:15:14.535541       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0225 03:15:14.535551       1 base_controller.go:39] All NodeController workers have been terminated\nI0225 03:15:14.535593       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0225 03:15:14.535603       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0225 03:15:14.535623       1 base_controller.go:49] Shutting down worker of  controller ...\nI0225 03:15:14.535634       1 base_controller.go:39] All  workers have been terminated\nI0225 03:15:14.535652       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0225 03:15:14.535668       1 base_controller.go:39] All PruneController workers have been terminated\nI0225 03:15:14.535743       1 base_controller.go:74] Shutting down  ...\nI0225 03:15:14.535761       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0225 03:15:14.536017       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nI0225 03:15:14.536108       1 base_controller.go:49] Shutting down worker of  controller ...\nI0225 03:15:14.536118       1 base_controller.go:39] All  workers have been terminated\nI0225 03:15:14.536143       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0225 03:15:14.536153       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0225 03:15:14.538490       1 builder.go:243] stopped\n
Feb 25 03:15:17.415 E ns/openshift-insights pod/insights-operator-858cd558d7-fvhfq node/ip-10-0-133-131.ec2.internal container=operator container exited with code 2 (Error):  configobserver.go:107] Refreshing configuration from cluster secret\nI0225 03:11:50.322027       1 httplog.go:90] GET /metrics: (7.46033ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:11:51.388582       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0225 03:12:20.320442       1 httplog.go:90] GET /metrics: (5.820839ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:12:20.458595       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0225 03:12:31.518718       1 httplog.go:90] GET /metrics: (6.456682ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\nI0225 03:12:49.489024       1 status.go:298] The operator is healthy\nI0225 03:12:50.319904       1 httplog.go:90] GET /metrics: (5.262449ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:13:01.511386       1 httplog.go:90] GET /metrics: (6.242613ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\nI0225 03:13:20.321146       1 httplog.go:90] GET /metrics: (6.411702ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:13:31.515613       1 httplog.go:90] GET /metrics: (8.691606ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\nI0225 03:13:50.332000       1 httplog.go:90] GET /metrics: (16.601664ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:14:01.513899       1 httplog.go:90] GET /metrics: (8.709433ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\nI0225 03:14:20.321018       1 httplog.go:90] GET /metrics: (6.272128ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:14:31.511291       1 httplog.go:90] GET /metrics: (6.11362ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\nI0225 03:14:49.489630       1 status.go:298] The operator is healthy\nI0225 03:14:50.320187       1 httplog.go:90] GET /metrics: (5.604749ms) 200 [Prometheus/2.15.2 10.131.0.32:39860]\nI0225 03:15:01.510884       1 httplog.go:90] GET /metrics: (5.594507ms) 200 [Prometheus/2.15.2 10.128.2.15:60862]\n
Feb 25 03:15:18.635 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-137-12.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-25T03:15:14.480Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-25T03:15:14.484Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-25T03:15:14.488Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-25T03:15:14.489Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-25T03:15:14.489Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-25T03:15:14.489Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-25T03:15:14.490Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-25T03:15:14.490Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-25T03:15:14.490Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-25T03:15:14.490Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-25T03:15:14.490Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-25T03:15:14.490Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-25T03:15:14.490Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-25T03:15:14.491Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-25T03:15:14.494Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-25T03:15:14.494Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-25
Feb 25 03:15:20.657 E ns/openshift-machine-config-operator pod/machine-config-operator-6c8944bf86-cfvtg node/ip-10-0-133-131.ec2.internal container=machine-config-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:22.107 E ns/openshift-monitoring pod/thanos-querier-679656699d-pwbk7 node/ip-10-0-133-131.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/25 02:56:59 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/25 02:56:59 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/25 02:56:59 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/25 02:56:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/25 02:56:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/25 02:56:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/25 02:56:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/25 02:56:59 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/25 02:56:59 http.go:107: HTTPS: listening on [::]:9091\nI0225 02:56:59.073163       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 25 03:15:24.469 E ns/openshift-operator-lifecycle-manager pod/packageserver-854d7958d7-hpjqp node/ip-10-0-133-131.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:24.512 E ns/openshift-service-ca-operator pod/service-ca-operator-85bcb85fb8-g78nr node/ip-10-0-133-131.ec2.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:15:24.561 E ns/openshift-service-ca pod/service-ca-685ff7556f-4qzrz node/ip-10-0-133-131.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Feb 25 03:15:37.165 E ns/openshift-monitoring pod/prometheus-operator-5fcd6866fb-msqcj node/ip-10-0-156-20.ec2.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-02-25T03:15:36.253339299Z caller=main.go:199 msg="Starting Prometheus Operator version '0.35.0'."\nts=2020-02-25T03:15:36.290056837Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-02-25T03:15:36.294720532Z caller=main.go:288 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 25 03:15:41.592 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:15:49.559 E ns/openshift-marketplace pod/marketplace-operator-755f66894c-msrg6 node/ip-10-0-156-20.ec2.internal container=marketplace-operator container exited with code 1 (Error): 
Feb 25 03:16:55.396 E ns/openshift-operator-lifecycle-manager pod/packageserver-854d7958d7-n6bp2 node/ip-10-0-142-81.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 25 03:16:56.592 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 25 03:17:40.757 E ns/openshift-monitoring pod/node-exporter-8xppm node/ip-10-0-155-150.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:17:40.776 E ns/openshift-cluster-node-tuning-operator pod/tuned-tlhdw node/ip-10-0-155-150.ec2.internal container=tuned container exited with code 143 (Error): 02:56:49,635 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:56:49,635 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:56:49,636 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-25 02:56:49,636 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-25 02:56:49,728 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:56:49,728 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:56:49,740 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:56:49,741 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:56:49,745 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:56:49,746 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:56:49,748 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:56:49,877 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:56:49,883 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0225 03:12:28.461900    2263 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:12:28.462820    2263 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:15:33.625321    2263 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:15:33.625346    2263 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:15:56.818046    2263 tuned.go:115] received signal: terminated\nI0225 03:15:56.818095    2263 tuned.go:327] sending TERM to PID 2476\n2020-02-25 03:15:56,818 INFO     tuned.daemon.controller: terminating controller\n2020-02-25 03:15:56,818 INFO     tuned.daemon.daemon: stopping tuning\n
Feb 25 03:17:40.838 E ns/openshift-multus pod/multus-njvc5 node/ip-10-0-155-150.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:17:40.902 E ns/openshift-machine-config-operator pod/machine-config-daemon-8kwmz node/ip-10-0-155-150.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:17:43.658 E ns/openshift-multus pod/multus-njvc5 node/ip-10-0-155-150.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:17:48.131 E ns/openshift-marketplace pod/certified-operators-55965cbb97-n497m node/ip-10-0-137-12.ec2.internal container=certified-operators container exited with code 2 (Error): 
Feb 25 03:17:49.247 E ns/openshift-machine-config-operator pod/machine-config-daemon-8kwmz node/ip-10-0-155-150.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:17:55.524 E ns/openshift-cluster-node-tuning-operator pod/tuned-mhfwf node/ip-10-0-133-131.ec2.internal container=tuned container exited with code 143 (Error): :57:06,176 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-25 02:57:06,177 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-25 02:57:06,177 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-25 02:57:06,178 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-25 02:57:06,240 INFO     tuned.daemon.controller: starting controller\n2020-02-25 02:57:06,241 INFO     tuned.daemon.daemon: starting tuning\n2020-02-25 02:57:06,253 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-25 02:57:06,254 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-25 02:57:06,257 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-25 02:57:06,259 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-25 02:57:06,263 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-25 02:57:06,517 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-25 02:57:06,527 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0225 03:09:16.328214    1910 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:09:16.328792    1910 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0225 03:15:23.917759    1910 tuned.go:494] profile "ip-10-0-133-131.ec2.internal" changed, tuned profile requested: openshift-node\nI0225 03:15:24.023692    1910 tuned.go:494] profile "ip-10-0-133-131.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0225 03:15:24.854953    1910 tuned.go:393] getting recommended profile...\nI0225 03:15:25.043001    1910 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Feb 25 03:17:55.568 E ns/openshift-sdn pod/sdn-controller-xf5x4 node/ip-10-0-133-131.ec2.internal container=sdn-controller container exited with code 2 (Error): I0225 03:00:11.653692       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0225 03:13:35.043376       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"dae4e048-8707-4836-b0c5-9dbc82fe89e0", ResourceVersion:"38612", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718194422, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-133-131\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-25T03:13:35Z\",\"renewTime\":\"2020-02-25T03:13:35Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-133-131 became leader'\nI0225 03:13:35.043499       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0225 03:13:35.050385       1 master.go:51] Initializing SDN master\nI0225 03:13:35.074172       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 25 03:17:55.585 E ns/openshift-etcd pod/etcd-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-25 02:50:20.567613 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-133-131.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-133-131.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:50:20.568718 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-25 02:50:20.569274 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-133-131.ec2.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-133-131.ec2.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-25 02:50:20.571340 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/25 02:50:20 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.133.131:9978: connect: connection refused". Reconnecting...\n
Feb 25 03:17:55.625 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-apiserver container exited with code 1 (Error): wn secret e2e-k8s-sig-apps-deployment-upgrade-4079/default-dockercfg-wvvm9\nI0225 03:14:58.507865       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-137-12.ec2.internal" cannot get unknown secret e2e-k8s-sig-apps-deployment-upgrade-4079/default-token-7gcvq\nI0225 03:14:58.637274       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-137-12.ec2.internal" cannot get secret openshift-console/default-dockercfg-chvxx, no relationship to this object was found in the node authorizer graph\nI0225 03:14:58.638471       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-137-12.ec2.internal" cannot get secret openshift-console/default-token-zb7dv, no relationship to this object was found in the node authorizer graph\nI0225 03:15:04.215757       1 cacher.go:782] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.\nW0225 03:15:04.615875       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-8c6j24f3-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: i/o timeout". Reconnecting...\nI0225 03:15:13.242747       1 cacher.go:782] cacher (*apps.ReplicaSet): 1 objects queued in incoming channel.\nI0225 03:15:13.710482       1 node_authorizer.go:193] NODE DENY: node "ip-10-0-137-35.ec2.internal" cannot get secret openshift-console/default-dockercfg-chvxx, no relationship to this object was found in the node authorizer graph\nI0225 03:15:14.766267       1 controller.go:606] quota admission added evaluator for: replicasets.apps\nI0225 03:15:33.397308       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-133-131.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0225 03:15:33.397529       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 25 03:17:55.625 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0225 02:56:07.129221       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 25 03:17:55.625 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:15:16.314441       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:16.314855       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0225 03:15:26.410686       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:26.411108       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 25 03:17:55.625 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): lient_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0225 03:15:33.419657       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:15:33.420098       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0225 03:15:33.420129       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0225 03:15:33.420148       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0225 03:15:33.420164       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0225 03:15:33.420182       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0225 03:15:33.420198       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0225 03:15:33.420217       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0225 03:15:33.420234       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0225 03:15:33.429322       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0225 03:15:33.429336       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0225 03:15:33.429348       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0225 03:15:33.532061       1 cabundlesyncer.go:86] CA bundle controller shut down\nE0225 03:15:33.650570       1 leaderelection.go:307] Failed to release lock: Put https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: read tcp [::1]:54864->[::1]:6443: read: connection reset by peer\nF0225 03:15:33.650628       1 leaderelection.go:67] leaderelection lost\n
Feb 25 03:17:55.659 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=cluster-policy-controller container exited with code 1 (Error): I0225 02:52:31.352220       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0225 02:52:31.355130       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0225 02:52:31.356238       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 25 03:17:55.659 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:14:59.954280       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:14:59.954706       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:01.864866       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:01.865221       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:09.962138       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:09.962477       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:11.872360       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:11.872693       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:19.972599       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:19.973056       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:21.897318       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:21.897789       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:29.980149       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:29.981001       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0225 03:15:31.894614       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0225 03:15:31.894991       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 25 03:17:55.659 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-controller-manager container exited with code 2 (Error): 'Normal' reason: 'SuccessfulCreate' Created pod: etcd-quorum-guard-f89fff966-c8mtc\nI0225 03:15:23.988971       1 controller.go:661] Detected change in list of current cluster nodes. New node set: map[ip-10-0-137-12.ec2.internal:{} ip-10-0-137-35.ec2.internal:{} ip-10-0-142-81.ec2.internal:{} ip-10-0-156-20.ec2.internal:{}]\nI0225 03:15:24.036618       1 aws_loadbalancer.go:1375] Instances added to load-balancer a4f8dbdc9d35644aea2a69a602f49a45\nI0225 03:15:24.051010       1 aws_loadbalancer.go:1386] Instances removed from load-balancer a4f8dbdc9d35644aea2a69a602f49a45\nI0225 03:15:24.549773       1 event.go:281] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-lb-available-3589", Name:"service-test", UID:"4f8dbdc9-d356-44ae-a2a6-9a602f49a45f", APIVersion:"v1", ResourceVersion:"21129", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0225 03:15:24.591939       1 aws_loadbalancer.go:1375] Instances added to load-balancer a38ef816dd8544a02a503ab8be1d97fa\nI0225 03:15:24.610164       1 aws_loadbalancer.go:1386] Instances removed from load-balancer a38ef816dd8544a02a503ab8be1d97fa\nI0225 03:15:24.996946       1 controller.go:669] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0225 03:15:24.997057       1 event.go:281] Event(v1.ObjectReference{Kind:"Service", Namespace:"openshift-ingress", Name:"router-default", UID:"38ef816d-d854-4a02-a503-ab8be1d97fa0", APIVersion:"v1", ResourceVersion:"13003", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0225 03:15:31.022640       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 85578cc4-1f73-4aea-a409-d99a0956f3e8]\nI0225 03:15:31.032514       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 85578cc4-1f73-4aea-a409-d99a0956f3e8] with propagation policy Background\n
Feb 25 03:17:55.659 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): 64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24145&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:56:04.990573       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=24145&timeout=7m54s&timeoutSeconds=474&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0225 02:56:12.035350       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0225 02:56:12.035413       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0225 03:12:29.458290       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0225 03:12:29.462848       1 csrcontroller.go:81] Starting CSR controller\nI0225 03:12:29.462874       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0225 03:12:29.462905       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"85122f62-2405-459c-99d4-d4f8d687b3ff", APIVersion:"v1", ResourceVersion:"38122", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7e79190f-981f-4261-a697-18ecd9c6c142 became leader\nI0225 03:12:29.767096       1 shared_informer.go:204] Caches are synced for CSRController \nI0225 03:15:33.417816       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0225 03:15:33.418566       1 csrcontroller.go:83] Shutting down CSR controller\nI0225 03:15:33.418627       1 csrcontroller.go:85] CSR controller shut down\nF0225 03:15:33.419441       1 builder.go:210] server exited\n
Feb 25 03:17:55.676 E ns/openshift-apiserver pod/apiserver-2qznq node/ip-10-0-133-131.ec2.internal container=openshift-apiserver container exited with code 1 (Error): /10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:14:51.314439       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:14:52.333948       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:14:52.845859       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:14:53.357615       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: connect: connection refused". Reconnecting...\nW0225 03:15:06.830621       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.156.20:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.156.20:2379: i/o timeout". Reconnecting...\nW0225 03:15:16.480020       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.12.64:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.12.64:2379: i/o timeout". Reconnecting...\nW0225 03:15:18.445639       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.12.64:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.12.64:2379: connect: connection timed out". Reconnecting...\n
Feb 25 03:17:55.707 E ns/openshift-machine-config-operator pod/machine-config-daemon-gt6mz node/ip-10-0-133-131.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 25 03:17:55.730 E ns/openshift-controller-manager pod/controller-manager-hngvq node/ip-10-0-133-131.ec2.internal container=controller-manager container exited with code 1 (Error): ymentconfig controller caches are synced. Starting workers.\nE0225 03:13:46.863187       1 reflector.go:156] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)\nE0225 03:13:46.863412       1 reflector.go:156] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0225 03:13:46.863624       1 reflector.go:156] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to list *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0225 03:13:49.935761       1 reflector.go:320] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.Image: the server is currently unable to handle the request (get images.image.openshift.io)\nE0225 03:13:49.935907       1 reflector.go:320] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0225 03:13:49.936543       1 reflector.go:320] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0225 03:13:53.005748       1 reflector.go:156] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to list *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0225 03:13:56.078742       1 reflector.go:320] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\n
Feb 25 03:17:55.754 E ns/openshift-machine-config-operator pod/machine-config-server-cs2lc node/ip-10-0-133-131.ec2.internal container=machine-config-server container exited with code 2 (Error): I0225 03:09:02.150181       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-295-gaf490f2d-dirty (af490f2d8ce4154905cb6e2a93de44ff4f142baa)\nI0225 03:09:02.151560       1 api.go:51] Launching server on :22624\nI0225 03:09:02.151726       1 api.go:51] Launching server on :22623\n
Feb 25 03:17:55.775 E ns/openshift-monitoring pod/node-exporter-k2kg9 node/ip-10-0-133-131.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:14:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:06Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-25T03:15:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 25 03:17:55.802 E ns/openshift-multus pod/multus-admission-controller-wxdhp node/ip-10-0-133-131.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 25 03:17:55.856 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-133-131.ec2.internal node/ip-10-0-133-131.ec2.internal container=scheduler container exited with code 2 (Error): le.\nI0225 03:15:22.108734       1 scheduler.go:751] pod openshift-service-ca-operator/service-ca-operator-85bcb85fb8-kcmgt is bound successfully on node "ip-10-0-156-20.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0225 03:15:22.486671       1 scheduler.go:751] pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-59dcf94b67-8fq52 is bound successfully on node "ip-10-0-156-20.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0225 03:15:22.727868       1 scheduler.go:751] pod openshift-service-ca/service-ca-685ff7556f-wdjbs is bound successfully on node "ip-10-0-142-81.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0225 03:15:22.988520       1 scheduler.go:751] pod openshift-authentication-operator/authentication-operator-64455864cb-x5hhj is bound successfully on node "ip-10-0-142-81.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0225 03:15:23.298554       1 scheduler.go:751] pod openshift-cloud-credential-operator/cloud-credential-operator-6f8fd4c79-czlwk is bound successfully on node "ip-10-0-156-20.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0225 03:15:23.670819       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-f89fff966-c8mtc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0225 03:15:30.675098       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-f89fff966-c8mtc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0225 03:15:32.765052       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-f89fff966-c8mtc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 25 03:17:55.907 E ns/openshift-multus pod/multus-kcksm node/ip-10-0-133-131.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 25 03:17:58.164 E ns/openshift-marketplace pod/community-operators-5b9ccfdf76-pftm6 node/ip-10-0-137-12.ec2.internal container=community-operators container exited with code 2 (Error): 
Feb 25 03:18:00.388 E ns/openshift-monitoring pod/node-exporter-k2kg9 node/ip-10-0-133-131.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:18:01.387 E ns/openshift-multus pod/multus-kcksm node/ip-10-0-133-131.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:18:05.402 E ns/openshift-multus pod/multus-kcksm node/ip-10-0-133-131.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 25 03:18:07.037 E clusteroperator/authentication changed Degraded to True: OAuthClients_Error: OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)
Feb 25 03:18:11.489 E ns/openshift-machine-config-operator pod/machine-config-daemon-gt6mz node/ip-10-0-133-131.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 25 03:24:08.842 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator authentication is reporting a failure: OAuthClientsDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io openshift-challenging-client)
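For follow-up on events like the authentication Degraded and clusterversion Failing transitions recorded above, operator status can be read directly with the standard oc client. A minimal sketch, assuming a kubeconfig for the cluster under test is still reachable (credentials and cluster access are not part of this job output):

    # Read-only status checks; none of these commands change cluster state.
    oc get clusterversion                        # overall version object and its Failing/Progressing conditions
    oc get clusteroperators                      # which operators currently report Degraded=True or Available=False
    oc describe clusteroperator authentication   # full condition text, e.g. the OAuthClientsDegraded message seen above

The same checks apply to any other operator named in the degraded events (dns, etcd, console), substituting the operator name in the final command.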