Result: FAILURE
Tests: 3 failed / 24 succeeded
Started: 2020-05-28 11:56
Elapsed: 2h10m
Work namespace: ci-op-3l9y6gwr
Refs: openshift-4.5:e78ee7f2, 50:9d4389c5
Pod: 3395ea5c-a0da-11ea-b706-0a580a80043d
Repo: openshift/etcd
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (1h25m)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
75 error-level events were detected during this test run:

May 28 12:28:07.379 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:28:07.207624       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:28:07.207632       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:28:07.207637       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:28:07.207643       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:28:07.207649       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:28:07.207655       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:28:07.207681       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:28:07.207694       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:28:07.207700       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:28:07.207707       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:28:07.207713       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:28:07.207719       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:28:07.207725       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:28:07.207731       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:28:07.207753       1 server.go:681] external host was not specified, using 10.0.213.221\nI0528 12:28:07.207902       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:28:07.208139       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:28:41.573 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:28:41.131259       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:28:41.131274       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:28:41.131284       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:28:41.131295       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:28:41.131305       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:28:41.131313       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:28:41.131327       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:28:41.131339       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:28:41.131350       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:28:41.131361       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:28:41.131370       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:28:41.131379       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:28:41.131406       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:28:41.131418       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:28:41.131454       1 server.go:681] external host was not specified, using 10.0.213.221\nI0528 12:28:41.131643       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:28:41.131926       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:29:06.690 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): :29:06.221157       1 reflector.go:181] Stopping reflector *v1.BrokerTemplateInstance (10m0s) from github.com/openshift/client-go/template/informers/externalversions/factory.go:101\nI0528 12:29:06.178194       1 job_controller.go:156] Shutting down job controller\nI0528 12:29:06.221193       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h0m20.466727775s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:29:06.178201       1 serviceaccounts_controller.go:129] Shutting down service account controller\nI0528 12:29:06.221145       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h0m20.466727775s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:29:06.178202       1 deployment_controller.go:165] Shutting down deployment controller\nI0528 12:29:06.178213       1 attach_detach_controller.go:374] Shutting down attach detach controller\nI0528 12:29:06.221270       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (13h10m39.526444808s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:29:06.178221       1 daemon_controller.go:282] Shutting down daemon sets controller\nI0528 12:29:06.178229       1 endpoints_controller.go:199] Shutting down endpoint controller\nI0528 12:29:06.221300       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h0m20.466727775s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:29:06.178229       1 certificate_controller.go:131] Shutting down certificate controller "csrapproving"\nI0528 12:29:06.221330       1 reflector.go:181] Stopping reflector *v1.ReplicationController (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:29:06.221362       1 reflector.go:181] Stopping reflector *v1.HorizontalPodAutoscaler (15s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:29:06.221427       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h0m20.466727775s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\n
May 28 12:29:06.690 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): ?allowWatchBookmarks=true&resourceVersion=17327&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:29:05.179802       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=17319&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:29:05.183106       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=23842&timeout=8m23s&timeoutSeconds=503&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:29:05.184112       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=17319&timeout=9m48s&timeoutSeconds=588&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:29:05.185710       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=23471&timeout=6m34s&timeoutSeconds=394&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:29:05.186950       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=17523&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0528 12:29:06.087088       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0528 12:29:06.087128       1 server.go:244] leaderelection lost\n
May 28 12:31:26.838 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 577)
May 28 12:32:40.246 E kube-apiserver Kube API started failing: Get https://api.ci-op-3l9y6gwr-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
May 28 12:32:50.315 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 12:32:49.642557 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:32:49.642592 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:32:49.642622 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:32:49.642651 I | etcdserver: election = 1000ms\n2020-05-28 12:32:49.642680 I | etcdserver: snapshot count = 100000\n2020-05-28 12:32:49.642715 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:32:49.839288 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:32:49.843835 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:32:49.843869 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:32:49.846259 I | mvcc: restore compact to 20558\n2020-05-28 12:32:49.883553 W | auth: simple token is not cryptographically signed\n2020-05-28 12:32:49.885359 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:32:49.886039 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:32:49.886076 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:32:49.886108 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:32:49.887157 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:32:49.887377 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:32:49.887746 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:32:49.887782 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:32:49.887812 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:32:49.887927 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:32:49.887998 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:32:53.397 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 85ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:32:52.904509 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:32:52.904535 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:32:52.906107 I | mvcc: restore compact to 20558\n2020-05-28 12:32:52.968329 W | auth: simple token is not cryptographically signed\n2020-05-28 12:32:52.970300 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:32:52.971635 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:32:52.971735 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:32:52.971909 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:32:52.972679 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:32:52.973201 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:32:52.973695 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:32:52.975101 I | embed: listening for metrics on https://0.0.0.0:9978\n2020-05-28 12:32:52.976852 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:32:52.976888 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:32:52.977035 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:32:52.977100 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n2020-05-28 12:32:52.977163 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n
May 28 12:33:02.365 E ns/openshift-machine-api pod/machine-api-operator-795c955978-z624t node/ip-10-0-131-111.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
May 28 12:33:09.479 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): c993 became follower at term 6\n2020-05-28 12:33:08.416613 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:33:08.418136 I | mvcc: restore compact to 20558\n2020-05-28 12:33:08.469231 W | auth: simple token is not cryptographically signed\n2020-05-28 12:33:08.471487 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:33:08.472545 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:33:08.472655 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:33:08.472730 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:33:08.474493 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:33:08.474561 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:33:08.474692 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:33:08.474731 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:33:08.474826 I | rafthttp: started HTTP pipelining with peer 9b634b5c21f96fa4\n2020-05-28 12:33:08.474899 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:33:08.475211 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream Message reader)\n2020-05-28 12:33:08.475286 I | embed: listening for metrics on https://0.0.0.0:9978\n2020-05-28 12:33:08.475496 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:33:08.475577 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:33:17.455 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:33:15.712711       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:33:15.712726       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:33:15.712746       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:33:15.712756       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:33:15.712765       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:33:15.712773       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:33:15.712788       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:33:15.712799       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:33:15.712829       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:33:15.712840       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:33:15.712851       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:33:15.712862       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:33:15.712873       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:33:15.712885       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:33:15.712921       1 server.go:681] external host was not specified, using 10.0.131.111\nI0528 12:33:15.713135       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:33:15.713385       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:33:17.543 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:33:16.692579       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:33:16.694709       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0528 12:33:16.694831       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0528 12:33:16.695479       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 28 12:33:29.612 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:33:28.919903       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:33:28.923218       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0528 12:33:28.923268       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0528 12:33:28.923947       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 28 12:33:32.525 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:33:32.020510       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:33:32.020519       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:33:32.020524       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:33:32.020530       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:33:32.020536       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:33:32.020541       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:33:32.020549       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:33:32.020555       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:33:32.020561       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:33:32.020568       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:33:32.020573       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:33:32.020579       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:33:32.020585       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:33:32.020590       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:33:32.020614       1 server.go:681] external host was not specified, using 10.0.131.111\nI0528 12:33:32.020762       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:33:32.021040       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:33:37.728 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 12:33:36.937182 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:33:36.937192 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:33:36.937199 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:33:36.937205 I | etcdserver: election = 1000ms\n2020-05-28 12:33:36.937212 I | etcdserver: snapshot count = 100000\n2020-05-28 12:33:36.937227 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:33:37.124631 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:33:37.125715 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:33:37.125737 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:33:37.127218 I | mvcc: restore compact to 20558\n2020-05-28 12:33:37.163197 W | auth: simple token is not cryptographically signed\n2020-05-28 12:33:37.165204 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:33:37.166588 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:33:37.166627 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:33:37.166659 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:33:37.167149 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:33:37.167400 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:33:37.168080 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:33:37.168148 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:33:37.168195 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:33:37.168268 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:33:37.168311 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:34:02.704 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:34:01.933136       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:34:01.933145       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:34:01.933151       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:34:01.933156       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:34:01.933162       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:34:01.933167       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:34:01.933175       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:34:01.933181       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:34:01.933188       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:34:01.933194       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:34:01.933202       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:34:01.933207       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:34:01.933213       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:34:01.933219       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:34:01.933242       1 server.go:681] external host was not specified, using 10.0.131.111\nI0528 12:34:01.933428       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:34:01.933695       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:34:27.796 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 35530cb1a at commit index 27805\n2020-05-28 12:34:27.278172 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:34:27.278199 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:34:27.279745 I | mvcc: restore compact to 20558\n2020-05-28 12:34:27.325136 W | auth: simple token is not cryptographically signed\n2020-05-28 12:34:27.327050 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:34:27.328024 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:34:27.328064 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:34:27.328113 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:34:27.328851 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:34:27.329308 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:34:27.329814 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:34:27.329846 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:34:27.329866 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:34:27.329925 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream Message reader)\n2020-05-28 12:34:27.330070 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:34:27.330167 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:34:27.330224 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:34:30.815 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): tentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=22360&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:34:30.137370       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=22364&timeout=8m10s&timeoutSeconds=490&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:34:30.139449       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=22364&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:34:30.148486       1 reflector.go:382] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=27033&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:34:30.152048       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=26963&timeout=8m28s&timeoutSeconds=508&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:34:30.155120       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=23471&timeout=5m49s&timeoutSeconds=349&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0528 12:34:30.462163       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0528 12:34:30.462203       1 server.go:244] leaderelection lost\n
May 28 12:34:41.599 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-161-65.us-west-1.compute.internal node/ip-10-0-161-65.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:34:40.793860       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:34:40.797136       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0528 12:34:40.798120       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0528 12:34:40.797338       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
May 28 12:34:57.004 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): 2.312803       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:kube-controller-manager" cannot list resource "endpoints" in API group "" at the cluster scope\nE0528 12:34:52.312859       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" at the cluster scope\nE0528 12:34:52.312939       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ResourceQuota: resourcequotas is forbidden: User "system:kube-controller-manager" cannot list resource "resourcequotas" in API group "" at the cluster scope\nE0528 12:34:52.312996       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-controller-manager" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope\nE0528 12:34:52.313074       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ConfigMap: configmaps is forbidden: User "system:kube-controller-manager" cannot list resource "configmaps" in API group "" at the cluster scope\nE0528 12:34:52.313126       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope\nE0528 12:34:52.313176       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1beta1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope\nI0528 12:34:56.233115       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0528 12:34:56.233188       1 policy_controller.go:94] leaderelection lost\n
May 28 12:34:59.742 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-161-65.us-west-1.compute.internal node/ip-10-0-161-65.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:34:58.812223       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:34:58.814340       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0528 12:34:58.814414       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0528 12:34:58.815049       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 28 12:35:09.721 E ns/openshift-machine-api pod/machine-api-controllers-ddc57b8f5-sfxb7 node/ip-10-0-161-65.us-west-1.compute.internal container/machineset-controller container exited with code 1 (Error): 
May 28 12:35:13.941 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5768dbb748-ffgqf node/ip-10-0-213-221.us-west-1.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): e"}],"versions":[{"name":"operator","version":"0.0.1-2020-05-28-115644"},{"name":"kube-storage-version-migrator","version":""}]}}\nI0528 12:21:29.284198       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b44c03c4-b9b9-44ae-8496-a5919e39c9e8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0528 12:32:34.084856       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b44c03c4-b9b9-44ae-8496-a5919e39c9e8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'VersionMappingFailure' unable to get version mapping: etcdserver: leader changed\nI0528 12:35:13.085210       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0528 12:35:13.085477       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0528 12:35:13.085572       1 builder.go:248] server exited\nI0528 12:35:13.085646       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0528 12:35:13.085694       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0528 12:35:13.085755       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0528 12:35:13.085782       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0528 12:35:13.085801       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0528 12:35:13.085846       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
May 28 12:35:17.742 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::StaticPods_Error: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 1m20s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 12:35:31.195 E ns/openshift-cluster-machine-approver pod/machine-approver-7cd74d56d7-brjn4 node/ip-10-0-131-111.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): 362&timeoutSeconds=322&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0528 12:34:45.116802       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=22404&timeoutSeconds=550&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0528 12:34:45.117842       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22362&timeoutSeconds=488&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0528 12:34:46.117455       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=22404&timeoutSeconds=525&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0528 12:34:46.118266       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22362&timeoutSeconds=489&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0528 12:34:52.341227       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: unknown (get clusteroperators.config.openshift.io)\nE0528 12:34:52.355396       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\n
May 28 12:35:32.068 E ns/openshift-kube-storage-version-migrator pod/migrator-858f574484-fxldw node/ip-10-0-169-79.us-west-1.compute.internal container/migrator container exited with code 2 (Error): 
May 28 12:35:43.074 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-6d585c7775-mp775 node/ip-10-0-169-79.us-west-1.compute.internal container/operator container exited with code 255 (Error): ator at 21.685162ms\nI0528 12:35:29.196776       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:29.196768028 +0000 UTC m=+840.453226825\nI0528 12:35:29.221534       1 operator.go:147] Finished syncing operator at 24.757425ms\nI0528 12:35:29.242029       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:29.242016995 +0000 UTC m=+840.498475960\nI0528 12:35:29.550596       1 operator.go:147] Finished syncing operator at 308.569264ms\nI0528 12:35:33.838121       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:33.838107337 +0000 UTC m=+845.094566320\nI0528 12:35:33.866517       1 operator.go:147] Finished syncing operator at 28.402775ms\nI0528 12:35:33.870844       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:33.870837151 +0000 UTC m=+845.127296116\nI0528 12:35:33.908509       1 operator.go:147] Finished syncing operator at 37.66507ms\nI0528 12:35:33.908548       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:33.908543723 +0000 UTC m=+845.165002490\nI0528 12:35:33.935403       1 operator.go:147] Finished syncing operator at 26.853534ms\nI0528 12:35:33.935438       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:33.935433988 +0000 UTC m=+845.191892740\nI0528 12:35:34.250849       1 operator.go:147] Finished syncing operator at 315.402289ms\nI0528 12:35:38.353391       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:38.35338056 +0000 UTC m=+849.609839361\nI0528 12:35:38.400676       1 operator.go:147] Finished syncing operator at 47.285171ms\nI0528 12:35:38.400728       1 operator.go:145] Starting syncing operator at 2020-05-28 12:35:38.400722783 +0000 UTC m=+849.657181542\nI0528 12:35:38.455343       1 operator.go:147] Finished syncing operator at 54.612346ms\nI0528 12:35:42.283575       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0528 12:35:42.283998       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0528 12:35:42.284036       1 builder.go:210] server exited\n
May 28 12:35:59.609 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:35:43.812761       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:35:43.812801       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:35:43.812835       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:35:43.812879       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:35:43.812917       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:35:43.812948       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:35:43.812987       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:35:43.813021       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:35:43.813054       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:35:43.813087       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:35:43.813124       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:35:43.813157       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:35:43.813190       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:35:43.813223       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:35:43.813295       1 server.go:681] external host was not specified, using 10.0.213.221\nI0528 12:35:43.813748       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:35:43.816535       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:36:01.717 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): client-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:36:00.525661 I | embed: listening for peers on https://0.0.0.0:2380\n2020-05-28 12:36:00.525877 I | embed: listening for client requests on 0.0.0.0:2379\n2020-05-28 12:36:00.611598 I | etcdserver: name = ip-10-0-213-221.us-west-1.compute.internal\n2020-05-28 12:36:00.611626 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:36:00.611635 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:36:00.611642 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:36:00.611648 I | etcdserver: election = 1000ms\n2020-05-28 12:36:00.611654 I | etcdserver: snapshot count = 100000\n2020-05-28 12:36:00.611670 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:36:00.904742 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:36:00.906280 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:36:00.906351 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:36:00.908448 I | mvcc: restore compact to 20558\n2020-05-28 12:36:00.988746 W | auth: simple token is not cryptographically signed\n2020-05-28 12:36:00.991358 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:36:00.992641 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:36:00.992730 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:36:00.992817 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:36:00.993988 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:36:00.994568 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:36:00.994719 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:36:00.994781 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:36:04.138 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-169-79.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/28 12:21:53 Watching directory: "/etc/alertmanager/config"\n
May 28 12:36:04.138 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-169-79.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/28 12:21:53 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/28 12:21:53 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/28 12:21:53 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/28 12:21:53 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/28 12:21:53 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/28 12:21:53 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/28 12:21:53 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/28 12:21:53 http.go:107: HTTPS: listening on [::]:9095\nI0528 12:21:53.921834       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
May 28 12:36:09.813 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:36:09.069529       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:36:09.072409       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0528 12:36:09.072882       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0528 12:36:09.073944       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 28 12:36:15.162 E ns/openshift-controller-manager pod/controller-manager-qs4br node/ip-10-0-161-65.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): I0528 12:16:56.720744       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (554623c)\nI0528 12:16:56.722846       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-3l9y6gwr/stable-initial@sha256:8b556f008b644e3e74ca8ea556680530c35374e4503d2de808c820b84da2dc55"\nI0528 12:16:56.722924       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-3l9y6gwr/stable-initial@sha256:579be1a4b551c32690f221641c5f4c18a54022e4571a45055696b3bada85fd1a"\nI0528 12:16:56.723043       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0528 12:16:56.723165       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
May 28 12:36:22.917 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:36:22.341384       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:36:22.341448       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:36:22.341495       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:36:22.341539       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:36:22.341576       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:36:22.341610       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:36:22.341650       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:36:22.341689       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:36:22.341722       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:36:22.341756       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:36:22.341789       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:36:22.341825       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:36:22.341858       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:36:22.341889       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:36:22.341947       1 server.go:681] external host was not specified, using 10.0.213.221\nI0528 12:36:22.342208       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:36:22.342557       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:36:28.030 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-111.us-west-1.compute.internal node/ip-10-0-131-111.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0528 12:36:26.944226       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0528 12:36:26.945956       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0528 12:36:26.946431       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0528 12:36:26.946716       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
May 28 12:36:42.754 E ns/openshift-monitoring pod/thanos-querier-75df5c78c7-rpbmd node/ip-10-0-194-113.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 8 12:21:56 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/28 12:21:56 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/28 12:21:56 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/28 12:21:56 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/28 12:21:56 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/05/28 12:21:56 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/28 12:21:56 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/28 12:21:56 http.go:107: HTTPS: listening on [::]:9091\nI0528 12:21:56.891115       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/28 12:23:25 oauthproxy.go:774: basicauth: 10.128.0.3:59898 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:25:25 oauthproxy.go:774: basicauth: 10.128.0.3:33024 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:26:25 oauthproxy.go:774: basicauth: 10.128.0.3:33688 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:27:25 oauthproxy.go:774: basicauth: 10.128.0.3:47132 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:29:25 oauthproxy.go:774: basicauth: 10.128.0.3:49916 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:34:02 oauthproxy.go:774: basicauth: 10.129.0.48:45096 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:34:02 oauthproxy.go:774: basicauth: 10.129.0.48:45096 Authorization header does not start with 'Basic', skipping basic authentication\n
May 28 12:36:45.040 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0528 12:36:44.164760       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:36:44.164768       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:36:44.164774       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:36:44.164780       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:36:44.164786       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:36:44.164791       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:36:44.164799       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:36:44.164805       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:36:44.164811       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:36:44.164818       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:36:44.164824       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:36:44.164830       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:36:44.164836       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:36:44.164842       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:36:44.164864       1 server.go:681] external host was not specified, using 10.0.213.221\nI0528 12:36:44.165020       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:36:44.165261       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:36:57.015 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-169-79.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-28T12:36:40.622Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-28T12:36:40.627Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-28T12:36:40.629Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-28T12:36:40.630Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-28T12:36:40.631Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-28T12:36:40.632Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-28T12:36:40.632Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-28T12:36:40.632Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-28T12:36:40.632Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-28T12:36:40.635Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-28T12:36:40.635Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-28
May 28 12:36:58.097 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:36:57.084112       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=24176&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0528 12:36:57.085342       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0528 12:36:57.085482       1 controllermanager.go:291] leaderelection lost\nI0528 12:36:57.148322       1 reflector.go:181] Stopping reflector *v1.Lease (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:36:57.148360       1 reflector.go:181] Stopping reflector *v1.HorizontalPodAutoscaler (15s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:36:57.148373       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h56m26.983544063s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:36:57.148548       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h56m26.983544063s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:36:57.148592       1 reflector.go:181] Stopping reflector *v1.Job (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:36:57.148610       1 reflector.go:181] Stopping reflector *v1.ValidatingWebhookConfiguration (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0528 12:36:57.148649       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h56m26.983544063s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:36:57.148675       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h56m26.983544063s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0528 12:36:57.148705       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (23h56m26.983544063s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\n
May 28 12:36:59.166 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): ect: connection refused\nE0528 12:36:57.094301       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=24185&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:36:57.109180       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=26286&timeout=7m35s&timeoutSeconds=455&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:36:57.112960       1 reflector.go:382] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=26286&timeout=8m30s&timeoutSeconds=510&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:36:57.114297       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=24180&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0528 12:36:57.135907       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=31059&timeout=5m22s&timeoutSeconds=322&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0528 12:36:58.028725       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0528 12:36:58.028759       1 server.go:244] leaderelection lost\n
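The kube-controller-manager and kube-scheduler fatals above ("leaderelection lost") are the intended reaction to the apiserver churn: once a component cannot renew its lease within the renew deadline, it exits rather than keep acting on a possibly stale mandate. A minimal sketch of that loop, assuming client-go's leaderelection package; the lock name, namespace, and durations here are illustrative, not this cluster's configuration:

    package main

    import (
    	"context"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    	"k8s.io/klog/v2"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		klog.Fatal(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(cfg)
    	hostname, _ := os.Hostname()

    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-lock"},
    		Client:     clientset.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
    	}
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second, // renewal must succeed within this window
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				klog.Info("acquired lease, running controllers")
    				<-ctx.Done()
    			},
    			// When the apiserver stays unreachable past RenewDeadline,
    			// this fires and the component exits -- the
    			// "leaderelection lost" fatals seen above.
    			OnStoppedLeading: func() {
    				klog.Fatal("leaderelection lost")
    			},
    		},
    	})
    }

Exiting here is a safety choice, not a bug: a controller that kept running without a valid lease could fight the new leader, so a restart under the apiserver's new revision is the cheaper failure.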
May 28 12:37:05.476 E ns/openshift-monitoring pod/thanos-querier-75df5c78c7-d2r7j node/ip-10-0-146-13.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 528 12:21:54.538649       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/28 12:22:25 oauthproxy.go:774: basicauth: 10.128.0.3:59148 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:24:25 oauthproxy.go:774: basicauth: 10.128.0.3:60566 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:28:25 oauthproxy.go:774: basicauth: 10.128.0.3:48046 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:30:25 oauthproxy.go:774: basicauth: 10.128.0.3:50672 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:31:25 oauthproxy.go:774: basicauth: 10.128.0.3:51406 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:32:03 oauthproxy.go:774: basicauth: 10.129.0.48:36924 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:33:02 oauthproxy.go:774: basicauth: 10.129.0.48:42542 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:33:02 oauthproxy.go:774: basicauth: 10.129.0.48:42542 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:35:02 oauthproxy.go:774: basicauth: 10.129.0.48:48966 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:35:02 oauthproxy.go:774: basicauth: 10.129.0.48:48966 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:36:03 oauthproxy.go:774: basicauth: 10.129.0.48:51902 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:36:03 oauthproxy.go:774: basicauth: 10.129.0.48:51902 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/28 12:37:02 oauthproxy.go:774: basicauth: 10.129.0.48:60888 Authorization header does not start with 'Basic', skipping basic authentication\n
May 28 12:37:05.509 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-146-13.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/28 12:21:40 Watching directory: "/etc/alertmanager/config"\n
May 28 12:37:05.509 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-146-13.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/28 12:21:40 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/28 12:21:40 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/28 12:21:40 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/28 12:21:40 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/28 12:21:40 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/28 12:21:40 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/28 12:21:40 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0528 12:21:40.831658       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/28 12:21:40 http.go:107: HTTPS: listening on [::]:9095\n
May 28 12:37:14.172 E ns/openshift-monitoring pod/node-exporter-rscfc node/ip-10-0-213-221.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -28T12:18:11Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-28T12:18:11Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 28 12:37:19.014 E ns/openshift-marketplace pod/redhat-marketplace-875bdcbcc-scxq4 node/ip-10-0-169-79.us-west-1.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
May 28 12:37:20.122 E ns/openshift-marketplace pod/redhat-operators-585856bbcd-qdfpt node/ip-10-0-169-79.us-west-1.compute.internal container/redhat-operators container exited with code 2 (Error): 
May 28 12:37:21.274 E ns/openshift-monitoring pod/node-exporter-ngbsj node/ip-10-0-131-111.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -28T12:17:06Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-28T12:17:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 28 12:37:21.958 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-194-113.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-28T12:37:18.164Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-28T12:37:18.167Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-28T12:37:18.168Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-28T12:37:18.169Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-28T12:37:18.169Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-28T12:37:18.169Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-28T12:37:18.171Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-28T12:37:18.171Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-28
May 28 12:37:44.797 E clusterversion/version changed Failing to True: WorkloadNotAvailable: could not find the deployment openshift-console/downloads during rollout
May 28 12:38:36.211 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-65.us-west-1.compute.internal node/ip-10-0-161-65.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0528 12:38:34.373048       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:38:34.373063       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:38:34.373073       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:38:34.373084       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:38:34.373094       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:38:34.373102       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:38:34.373118       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:38:34.373138       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:38:34.373149       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:38:34.373161       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:38:34.373172       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:38:34.373184       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:38:34.373194       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:38:34.373204       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:38:34.373242       1 server.go:681] external host was not specified, using 10.0.161.65\nI0528 12:38:34.373408       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:38:34.373653       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:38:50.711 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 12:38:50.126631 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:38:50.126641 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:38:50.126648 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:38:50.126654 I | etcdserver: election = 1000ms\n2020-05-28 12:38:50.126661 I | etcdserver: snapshot count = 100000\n2020-05-28 12:38:50.126673 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:38:50.319680 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:38:50.320796 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:38:50.320819 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:38:50.322514 I | mvcc: restore compact to 20558\n2020-05-28 12:38:50.362871 W | auth: simple token is not cryptographically signed\n2020-05-28 12:38:50.365095 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:38:50.365864 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:38:50.365906 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:38:50.365946 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:38:50.366617 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:38:50.366972 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:38:50.371495 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:38:50.371542 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:38:50.371616 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:38:50.371668 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:38:50.371691 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
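From here on, the etcd member on ip-10-0-213-221 crash-loops on the same fatal: a 3.3.18 binary keeps restarting into a cluster whose version has already been decided as 3.4, and etcd refuses any downgrade. etcd implements this gate with its own semver helpers; a sketch of the equivalent comparison using golang.org/x/mod/semver (mustNotDowngrade is a hypothetical stand-in):

    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver" // x/mod/semver requires a leading "v"
    )

    // mustNotDowngrade mirrors the membership check that kills the member
    // above: a binary older than the decided cluster version may not join.
    func mustNotDowngrade(binaryVer, clusterVer string) error {
    	if semver.Compare("v"+binaryVer, "v"+clusterVer) < 0 {
    		return fmt.Errorf(
    			"cluster cannot be downgraded (current version: %s is lower than determined cluster version: %s)",
    			binaryVer, clusterVer)
    	}
    	return nil
    }

    func main() {
    	// Prints the same complaint as the fatal log line above.
    	fmt.Println(mustNotDowngrade("3.3.18", "3.4"))
    }

Because the check fires before the member serves anything, no amount of kubelet restarts can clear it; only rolling the pod forward to a 3.4 binary (or rebuilding the member) would.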
May 28 12:38:57.299 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-65.us-west-1.compute.internal node/ip-10-0-161-65.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0528 12:38:56.624395       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:38:56.624404       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:38:56.624410       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:38:56.624415       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:38:56.624421       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:38:56.624426       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:38:56.624434       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:38:56.624440       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:38:56.624446       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:38:56.624452       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:38:56.624458       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:38:56.624465       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:38:56.624471       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:38:56.624476       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:38:56.624497       1 server.go:681] external host was not specified, using 10.0.161.65\nI0528 12:38:56.624644       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:38:56.624846       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:39:20.407 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-65.us-west-1.compute.internal node/ip-10-0-161-65.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0528 12:39:19.634038       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0528 12:39:19.634046       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0528 12:39:19.634052       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0528 12:39:19.634057       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0528 12:39:19.634063       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0528 12:39:19.634068       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0528 12:39:19.634076       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0528 12:39:19.634081       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0528 12:39:19.634087       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0528 12:39:19.634093       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0528 12:39:19.634099       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0528 12:39:19.634105       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0528 12:39:19.634111       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0528 12:39:19.634116       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0528 12:39:19.634139       1 server.go:681] external host was not specified, using 10.0.161.65\nI0528 12:39:19.634293       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0528 12:39:19.634557       1 server.go:193] Version: v1.18.3\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 28 12:44:05.946 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): dserver: snapshot count = 100000\n2020-05-28 12:44:05.053052 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:44:05.243168 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:44:05.244349 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:44:05.244373 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:44:05.246081 I | mvcc: restore compact to 20558\n2020-05-28 12:44:05.285895 W | auth: simple token is not cryptographically signed\n2020-05-28 12:44:05.288189 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:44:05.289174 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:44:05.289210 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:44:05.289244 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:44:05.289963 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:44:05.290436 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:44:05.290865 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:44:05.290919 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:44:05.290953 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:44:05.291057 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:44:05.291085 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:49:11.111 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): r peers on https://0.0.0.0:2380\n2020-05-28 12:49:10.027846 I | embed: listening for client requests on 0.0.0.0:2379\n2020-05-28 12:49:10.059727 I | etcdserver: name = ip-10-0-213-221.us-west-1.compute.internal\n2020-05-28 12:49:10.059756 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:49:10.059765 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:49:10.059773 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:49:10.059779 I | etcdserver: election = 1000ms\n2020-05-28 12:49:10.059786 I | etcdserver: snapshot count = 100000\n2020-05-28 12:49:10.059809 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:49:10.265158 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:49:10.266299 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:49:10.266319 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:49:10.268278 I | mvcc: restore compact to 20558\n2020-05-28 12:49:10.306247 W | auth: simple token is not cryptographically signed\n2020-05-28 12:49:10.309095 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:49:10.309946 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:49:10.309981 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:49:10.310017 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:49:10.311241 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:49:10.311639 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:49:10.311701 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:49:10.311765 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:49:10.311793 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:50:27.300 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
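The "back-off 5m0s" in these Degraded conditions is the kubelet's crash-loop backoff at its cap: the restart delay roughly doubles per failure until it reaches five minutes, which is why the etcd events below arrive about five minutes apart for the rest of the run. A small arithmetic sketch, assuming the kubelet's default 10s base, 2x factor, and 5m cap:

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay doubles from a 10s base and caps at 5m -- the kubelet's
    // default container-restart backoff (values assumed here, not read from
    // this cluster's config).
    func crashLoopDelay(restarts int) time.Duration {
    	delay := 10 * time.Second
    	for i := 0; i < restarts; i++ {
    		delay *= 2
    		if delay >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return delay
    }

    func main() {
    	for r := 0; r <= 6; r++ {
    		fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
    	}
    }

After roughly five failures the delay saturates, so each subsequent etcd exit below is spaced ~5m plus the few seconds the container survives.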
May 28 12:54:23.353 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error):  mvcc: restore compact to 20558\n2020-05-28 12:54:22.243699 W | auth: simple token is not cryptographically signed\n2020-05-28 12:54:22.245796 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:54:22.246541 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:54:22.246583 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:54:22.246651 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:54:22.247176 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:54:22.247775 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:54:22.248242 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 12:54:22.248454 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:54:22.248488 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:54:22.248550 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:54:22.248708 I | rafthttp: started HTTP pipelining with peer 9b634b5c21f96fa4\n2020-05-28 12:54:22.248840 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream Message reader)\n2020-05-28 12:54:22.248863 I | embed: listening for metrics on https://0.0.0.0:9978\n2020-05-28 12:54:22.248993 I | raft: raft.node: 96c0a1fc885ec993 elected leader 9b634b5c21f96fa4 at term 6\n2020-05-28 12:54:22.249112 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:54:22.249143 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 12:57:42.412 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 12:59:32.529 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 12:59:32.013800 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 12:59:32.013809 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 12:59:32.013815 I | etcdserver: heartbeat = 100ms\n2020-05-28 12:59:32.013821 I | etcdserver: election = 1000ms\n2020-05-28 12:59:32.013827 I | etcdserver: snapshot count = 100000\n2020-05-28 12:59:32.013842 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 12:59:32.205509 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 12:59:32.206697 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 12:59:32.206730 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 12:59:32.208399 I | mvcc: restore compact to 20558\n2020-05-28 12:59:32.244826 W | auth: simple token is not cryptographically signed\n2020-05-28 12:59:32.246646 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 12:59:32.247757 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 12:59:32.247796 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 12:59:32.247832 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 12:59:32.248364 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:59:32.248727 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 12:59:32.249175 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 12:59:32.249210 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 12:59:32.249235 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 12:59:32.249319 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 12:59:32.249345 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:04:38.736 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): ic-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 13:04:38.256840 I | embed: listening for metrics on https://0.0.0.0:9978\n2020-05-28 13:04:38.257340 I | rafthttp: started HTTP pipelining with peer ee856961f4548e5a\n2020-05-28 13:04:38.257370 E | rafthttp: failed to find member ee856961f4548e5a in cluster 529e02535530cb1a\n2020-05-28 13:04:38.257629 E | rafthttp: failed to find member ee856961f4548e5a in cluster 529e02535530cb1a\n2020-05-28 13:04:38.257730 I | rafthttp: started HTTP pipelining with peer 9b634b5c21f96fa4\n2020-05-28 13:04:38.257884 I | raft: raft.node: 96c0a1fc885ec993 elected leader 9b634b5c21f96fa4 at term 6\n2020-05-28 13:04:38.257929 E | rafthttp: failed to find member 9b634b5c21f96fa4 in cluster 529e02535530cb1a\n2020-05-28 13:04:38.258167 E | rafthttp: failed to find member 9b634b5c21f96fa4 in cluster 529e02535530cb1a\n2020-05-28 13:04:38.258456 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:04:38.258475 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:04:38.258504 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:04:38.259453 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:04:38.259773 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:04:38.260472 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:04:38.260541 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:04:38.261140 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:04:38.261170 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
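The "failed to find member" errors above show the crashing member replaying stale on-disk membership (commit index 27805, a single known peer 8d487b44db3c1ca3) while the live cluster has moved on to peers 9b634b5c21f96fa4 and ee856961f4548e5a. A hedged sketch of listing the live membership with etcd's clientv3 for comparison; the endpoint is illustrative, the import path is the etcd 3.3/3.4-era one, and a real check against this cluster would need the serving TLS material referenced in the logs:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"go.etcd.io/etcd/clientv3"
    )

    func main() {
    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://10.0.213.221:2379"}, // illustrative; plus TLS in practice
    		DialTimeout: 5 * time.Second,
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	resp, err := cli.MemberList(ctx)
    	if err != nil {
    		panic(err)
    	}
    	// Compare these IDs against the peers the crashing member replays
    	// from its WAL; here they diverge, so raft messages from the live
    	// leader are rejected with "failed to find member".
    	for _, m := range resp.Members {
    		fmt.Printf("%x %s %v\n", m.ID, m.Name, m.PeerURLs)
    	}
    }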
May 28 13:06:42.412 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 13:09:43.901 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 13:09:43.095990 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 13:09:43.096001 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:09:43.096008 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:09:43.096016 I | etcdserver: election = 1000ms\n2020-05-28 13:09:43.096023 I | etcdserver: snapshot count = 100000\n2020-05-28 13:09:43.096042 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:09:43.284197 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:09:43.285498 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:09:43.285530 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:09:43.290600 I | mvcc: restore compact to 20558\n2020-05-28 13:09:43.335950 W | auth: simple token is not cryptographically signed\n2020-05-28 13:09:43.338371 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:09:43.339783 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:09:43.339838 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:09:43.339862 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:09:43.340697 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:09:43.341256 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:09:43.345799 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:09:43.345847 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:09:43.345901 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 13:09:43.346014 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:09:43.346045 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:14:47.070 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 13:14:46.002202 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 13:14:46.002212 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:14:46.002220 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:14:46.002227 I | etcdserver: election = 1000ms\n2020-05-28 13:14:46.002233 I | etcdserver: snapshot count = 100000\n2020-05-28 13:14:46.002265 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:14:46.238578 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:14:46.239710 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:14:46.239758 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:14:46.241670 I | mvcc: restore compact to 20558\n2020-05-28 13:14:46.277897 W | auth: simple token is not cryptographically signed\n2020-05-28 13:14:46.280169 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:14:46.281238 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:14:46.281340 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:14:46.281420 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:14:46.282162 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:14:46.282467 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:14:46.282566 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:14:46.282602 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:14:46.282652 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 13:14:46.282731 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:14:46.282758 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:15:57.413 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 13:19:58.270 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): dserver: snapshot count = 100000\n2020-05-28 13:19:57.030053 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:19:57.236894 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:19:57.238087 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:19:57.238112 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:19:57.239612 I | mvcc: restore compact to 20558\n2020-05-28 13:19:57.275486 W | auth: simple token is not cryptographically signed\n2020-05-28 13:19:57.277573 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:19:57.278621 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:19:57.278662 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:19:57.278705 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:19:57.279382 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:19:57.279737 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:19:57.279999 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 13:19:57.280192 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:19:57.280228 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:19:57.280341 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:19:57.280363 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:24:57.416 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 13:25:10.477 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 13:25:09.061332 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 13:25:09.061351 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:25:09.061359 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:25:09.061366 I | etcdserver: election = 1000ms\n2020-05-28 13:25:09.061373 I | etcdserver: snapshot count = 100000\n2020-05-28 13:25:09.061420 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:25:09.261808 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:25:09.262730 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:25:09.262754 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:25:09.264323 I | mvcc: restore compact to 20558\n2020-05-28 13:25:09.308079 W | auth: simple token is not cryptographically signed\n2020-05-28 13:25:09.309932 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:25:09.310526 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:25:09.310568 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:25:09.310633 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:25:09.311211 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:25:09.311600 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:25:09.311949 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:25:09.311983 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:25:09.311997 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 13:25:09.312085 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:25:09.312112 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:30:14.626 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error):  URLs = https://10.0.213.221:2379\n2020-05-28 13:30:14.215860 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:30:14.217084 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:30:14.217110 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:30:14.218648 I | mvcc: restore compact to 20558\n2020-05-28 13:30:14.257468 W | auth: simple token is not cryptographically signed\n2020-05-28 13:30:14.259240 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:30:14.260265 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:30:14.260303 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:30:14.260362 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:30:14.260983 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:30:14.261348 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:30:14.261435 I | embed: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 13:30:14.261731 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:30:14.261767 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:30:14.261885 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:30:14.261914 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n2020-05-28 13:30:14.261923 I | rafthttp: started HTTP pipelining with peer 9b634b5c21f96fa4\n
May 28 13:33:57.412 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 13:35:26.851 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): g for client requests on 0.0.0.0:2379\n2020-05-28 13:35:25.973557 I | etcdserver: name = ip-10-0-213-221.us-west-1.compute.internal\n2020-05-28 13:35:25.973584 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 13:35:25.973592 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:35:25.973599 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:35:25.973605 I | etcdserver: election = 1000ms\n2020-05-28 13:35:25.973612 I | etcdserver: snapshot count = 100000\n2020-05-28 13:35:25.973624 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:35:26.179647 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:35:26.180824 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:35:26.180850 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:35:26.182300 I | mvcc: restore compact to 20558\n2020-05-28 13:35:26.221890 W | auth: simple token is not cryptographically signed\n2020-05-28 13:35:26.223733 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:35:26.224350 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:35:26.224412 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:35:26.224464 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:35:26.225128 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:35:26.225637 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:35:26.226166 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:35:26.226283 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:35:26.226424 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:35:26.226448 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:40:39.079 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): ic-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving/etcd-serving-ip-10-0-213-221.us-west-1.compute.internal.key, ca = , trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-serving-ca/ca-bundle.crt, client-cert-auth = true, crl-file = \n2020-05-28 13:40:38.287946 I | rafthttp: started HTTP pipelining with peer ee856961f4548e5a\n2020-05-28 13:40:38.288025 E | rafthttp: failed to find member ee856961f4548e5a in cluster 529e02535530cb1a\n2020-05-28 13:40:38.288085 I | rafthttp: started HTTP pipelining with peer 9b634b5c21f96fa4\n2020-05-28 13:40:38.288112 E | rafthttp: failed to find member 9b634b5c21f96fa4 in cluster 529e02535530cb1a\n2020-05-28 13:40:38.288180 I | embed: listening for metrics on https://0.0.0.0:9978\n2020-05-28 13:40:38.288378 E | rafthttp: failed to find member ee856961f4548e5a in cluster 529e02535530cb1a\n2020-05-28 13:40:38.288505 E | rafthttp: failed to find member 9b634b5c21f96fa4 in cluster 529e02535530cb1a\n2020-05-28 13:40:38.288726 I | raft: raft.node: 96c0a1fc885ec993 elected leader 9b634b5c21f96fa4 at term 6\n2020-05-28 13:40:38.289097 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:40:38.289128 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:40:38.289162 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:40:38.289976 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:40:38.290654 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:40:38.290734 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:40:38.290770 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:40:38.290881 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:40:38.290906 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:42:42.414 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
May 28 13:45:55.307 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 1.compute.internal\n2020-05-28 13:45:54.132089 I | etcdserver: data dir = /var/lib/etcd\n2020-05-28 13:45:54.132095 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:45:54.132100 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:45:54.132104 I | etcdserver: election = 1000ms\n2020-05-28 13:45:54.132109 I | etcdserver: snapshot count = 100000\n2020-05-28 13:45:54.132120 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:45:54.324615 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:45:54.325867 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:45:54.325893 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:45:54.327419 I | mvcc: restore compact to 20558\n2020-05-28 13:45:54.368867 W | auth: simple token is not cryptographically signed\n2020-05-28 13:45:54.370883 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:45:54.371521 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:45:54.371557 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:45:54.371593 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:45:54.372227 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:45:54.372653 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:45:54.372988 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:45:54.373018 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:45:54.373033 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 13:45:54.373148 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:45:54.373174 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:51:01.507 E ns/openshift-etcd pod/etcd-ip-10-0-213-221.us-west-1.compute.internal node/ip-10-0-213-221.us-west-1.compute.internal container/etcd container exited with code 1 (Error): 093 I | etcdserver: member dir = /var/lib/etcd/member\n2020-05-28 13:51:01.004099 I | etcdserver: heartbeat = 100ms\n2020-05-28 13:51:01.004105 I | etcdserver: election = 1000ms\n2020-05-28 13:51:01.004112 I | etcdserver: snapshot count = 100000\n2020-05-28 13:51:01.004125 I | etcdserver: advertise client URLs = https://10.0.213.221:2379\n2020-05-28 13:51:01.209076 I | etcdserver: restarting member 96c0a1fc885ec993 in cluster 529e02535530cb1a at commit index 27805\n2020-05-28 13:51:01.210230 I | raft: 96c0a1fc885ec993 became follower at term 6\n2020-05-28 13:51:01.210253 I | raft: newRaft 96c0a1fc885ec993 [peers: [], term: 6, commit: 27805, applied: 0, lastindex: 27805, lastterm: 6]\n2020-05-28 13:51:01.211853 I | mvcc: restore compact to 20558\n2020-05-28 13:51:01.259223 W | auth: simple token is not cryptographically signed\n2020-05-28 13:51:01.261450 I | etcdserver: starting server... [version: 3.3.18, cluster version: to_be_decided]\n2020-05-28 13:51:01.262335 I | etcdserver/membership: added member 8d487b44db3c1ca3 [https://10.0.21.135:2380] to cluster 529e02535530cb1a\n2020-05-28 13:51:01.262366 I | rafthttp: starting peer 8d487b44db3c1ca3...\n2020-05-28 13:51:01.262521 I | rafthttp: started HTTP pipelining with peer 8d487b44db3c1ca3\n2020-05-28 13:51:01.263185 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:51:01.263716 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (writer)\n2020-05-28 13:51:01.276035 I | rafthttp: started peer 8d487b44db3c1ca3\n2020-05-28 13:51:01.276078 I | rafthttp: added peer 8d487b44db3c1ca3\n2020-05-28 13:51:01.276135 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream Message reader)\n2020-05-28 13:51:01.276206 I | rafthttp: started streaming with peer 8d487b44db3c1ca3 (stream MsgApp v2 reader)\n2020-05-28 13:51:01.276459 N | etcdserver/membership: set the initial cluster version to 3.4\n2020-05-28 13:51:01.276529 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.3.18 is lower than determined cluster version: 3.4).\n
May 28 13:51:25.932 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator etcd is reporting a failure: StaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is not ready: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nStaticPodsDegraded: pod/etcd-ip-10-0-213-221.us-west-1.compute.internal container "etcd" is waiting: CrashLoopBackOff: back-off 5m0s restarting failed container=etcd pod=etcd-ip-10-0-213-221.us-west-1.compute.internal_openshift-etcd(0391c0d39a1d2f382888da2156f7644b)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-213-221.us-west-1.compute.internal is unhealthy
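Taken together, the events from 13:30 to 13:51 show a fixed pattern: member 96c0a1fc885ec993 restarts from the same commit index (27805), replays the same membership entries, and hits the same fatal downgrade check, so the back-off loop cannot clear on its own. The data dir records cluster version 3.4 while the container runs etcd 3.3.18; presumably the loop ends only once this member's pod runs a matching 3.4 binary, not through any change to the data.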