Result: SUCCESS
Tests: 3 failed / 22 succeeded
Started: 2020-04-08 01:41
Elapsed: 1h24m
Work namespace: ci-op-yzvgscmf
Refs: openshift-4.5:fe90dcbe
      44:8b80929a
Pod: f2055708-7939-11ea-9202-0a58ac101cb7
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (38m15s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 1s of 33m25s (0%):

Apr 08 02:32:36.972 E ns/e2e-k8s-service-lb-available-9345 svc/service-test Service stopped responding to GET requests on reused connections
Apr 08 02:32:37.007 I ns/e2e-k8s-service-lb-available-9345 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1586314543.xml
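
Note: the percentage in the summary line above is the unavailable time divided by the total measurement window. A minimal Go sketch of that arithmetic (the exact rounding the report applies is an assumption) shows why 1s out of 33m25s is displayed as 0%:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the summary line above.
	unavailable := 1 * time.Second
	window := 33*time.Minute + 25*time.Second

	// Fraction of the window during which the service was unreachable.
	pct := float64(unavailable) / float64(window) * 100

	// 1s / 2005s is roughly 0.05%, which truncates to the "0%" shown in the
	// report (whether the report truncates or rounds is an assumption here).
	fmt.Printf("unavailable %v of %v (%.2f%%, shown as %d%%)\n",
		unavailable, window, pct, int(pct))
}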


Cluster upgrade Kubernetes and OpenShift APIs remain available (36m44s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 4s of 36m44s (0%):

Apr 08 02:27:58.991 E kube-apiserver Kube API started failing: Get https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 3.231.122.136:6443: connect: connection refused
Apr 08 02:27:59.939 E kube-apiserver Kube API is not responding to GET requests
Apr 08 02:27:59.970 I kube-apiserver Kube API started responding to GET requests
Apr 08 02:49:13.972 E kube-apiserver Kube API started failing: Get https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 3.231.122.136:6443: connect: connection refused
Apr 08 02:49:14.939 E kube-apiserver Kube API is not responding to GET requests
Apr 08 02:49:15.272 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1586314543.xml
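
Note: the disruption entries above come from repeated GETs against the kube-apiserver endpoint shown in the log (api/v1/namespaces/kube-system with a 15s timeout); 4s out of 36m44s is roughly 0.18%, hence the 0% summary. A minimal Go sketch of such a probe, with the polling loop and TLS handling as illustrative assumptions rather than the monitor's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint and timeout mirror the failing requests logged above.
	url := "https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443" +
		"/api/v1/namespaces/kube-system?timeout=15s"

	client := &http.Client{
		Timeout: 15 * time.Second,
		// The real monitor authenticates with cluster credentials; skipping
		// certificate verification just keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 3; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("Kube API is not responding to GET requests:", err)
		} else {
			resp.Body.Close()
			fmt.Println("Kube API started responding to GET requests:", resp.Status)
		}
		time.Sleep(time.Second)
	}
}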


openshift-tests Monitor cluster while tests execute (38m16s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
187 error level events were detected during this test run:

Apr 08 02:17:32.226 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:17:30.202Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:17:30.210Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:17:30.212Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:17:30.213Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:17:30.214Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:17:30.214Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:17:30.214Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:17:30.214Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:17:30.218Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:17:30.218Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:17:38.072 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-68.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:16:36.187Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:16:36.193Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:16:36.193Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:16:36.194Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:16:36.194Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:16:36.194Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:16:36.195Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:16:36.195Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:17:38.072 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-68.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T02:16:38.234780707Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T02:16:38.236392069Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T02:16:43.332403522Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T02:16:43.332531583Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 08 02:17:48.355 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-68.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:17:47.149Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:17:47.156Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:17:47.156Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:17:47.157Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:17:47.158Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:17:47.158Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:17:47.158Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:17:47.159Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:17:47.159Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=info ts=2020-04-08T02:17:47.159Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:20:54.614 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-79c5c4d49c" has successfully progressed.
Apr 08 02:22:27.453 E ns/openshift-machine-api pod/machine-api-operator-79f69c8cb9-h6lvm node/ip-10-0-135-223.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 08 02:23:02.852 E kube-apiserver Kube API started failing: Get https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 08 02:23:58.424 E kube-apiserver failed contacting the API: Get https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=25057&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 54.163.206.188:6443: connect: connection refused
Apr 08 02:24:24.544 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-153-196.ec2.internal members are unhealthy,  members are unknown
Apr 08 02:24:39.902 E ns/openshift-machine-api pod/machine-api-controllers-57dcbd44b7-l27dm node/ip-10-0-135-223.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 02:24:54.631 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 02:26:36.138 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6f6489f9b9-rgpn6 node/ip-10-0-134-80.ec2.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-04-08T02:00:29Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-04-08T02:00:30Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-04-08T02:00:29Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-08T02:00:28Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-08-014149"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0408 02:10:55.226004       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b2ca4107-312f-493c-ba91-bba60ec4fe03", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 02:26:35.058074       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0408 02:26:35.058125       1 leaderelection.go:66] leaderelection lost\n
Apr 08 02:26:47.035 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator is not available MinimumReplicasUnavailable: Deployment does not have minimum availability.
Apr 08 02:28:09.491 E ns/openshift-cluster-machine-approver pod/machine-approver-55764c6bb7-4twwv node/ip-10-0-134-80.ec2.internal container/machine-approver-controller container exited with code 2 (Error): 0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=15880&timeoutSeconds=317&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nI0408 02:14:04.581551       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0408 02:14:04.582483       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=19475&timeoutSeconds=462&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 02:14:05.583151       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=19475&timeoutSeconds=463&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 02:14:11.799304       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\nE0408 02:27:58.278174       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=26073&timeoutSeconds=564&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 02:27:59.279329       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=26073&timeoutSeconds=389&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 08 02:28:22.595 E ns/openshift-service-ca-operator pod/service-ca-operator-66b5f5df4f-h2n7t node/ip-10-0-134-80.ec2.internal container/operator container exited with code 1 (Error): 
Apr 08 02:28:22.716 E ns/openshift-kube-storage-version-migrator pod/migrator-778b8745cd-ct27c node/ip-10-0-154-233.ec2.internal container/migrator container exited with code 2 (Error): 
Apr 08 02:28:24.436 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-135-223.ec2.internal members are unhealthy,  members are unknown
Apr 08 02:28:26.737 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-bb6d5b9cf-jzrj8 node/ip-10-0-154-233.ec2.internal container/operator container exited with code 255 (Error): +1041.209355121\nI0408 02:28:16.279281       1 operator.go:147] Finished syncing operator at 46.199637ms\nI0408 02:28:16.350648       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:16.350637906 +0000 UTC m=+1041.326918927\nI0408 02:28:16.439202       1 operator.go:147] Finished syncing operator at 88.550365ms\nI0408 02:28:20.374067       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:20.374057095 +0000 UTC m=+1045.350337997\nI0408 02:28:20.399674       1 operator.go:147] Finished syncing operator at 25.609775ms\nI0408 02:28:20.409099       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:20.409091678 +0000 UTC m=+1045.385372592\nI0408 02:28:20.442007       1 operator.go:147] Finished syncing operator at 32.907499ms\nI0408 02:28:20.442060       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:20.44205423 +0000 UTC m=+1045.418335187\nI0408 02:28:20.476326       1 operator.go:147] Finished syncing operator at 34.262021ms\nI0408 02:28:20.476377       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:20.476370353 +0000 UTC m=+1045.452651338\nI0408 02:28:20.824108       1 operator.go:147] Finished syncing operator at 347.727644ms\nI0408 02:28:21.052711       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:21.052696712 +0000 UTC m=+1046.028977813\nI0408 02:28:21.396648       1 operator.go:147] Finished syncing operator at 343.941485ms\nI0408 02:28:21.396698       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:21.396693385 +0000 UTC m=+1046.372974294\nI0408 02:28:22.071769       1 operator.go:147] Finished syncing operator at 675.063025ms\nI0408 02:28:25.876171       1 operator.go:145] Starting syncing operator at 2020-04-08 02:28:25.876157629 +0000 UTC m=+1050.852438664\nI0408 02:28:25.927683       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 02:28:25.928117       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0408 02:28:25.929181       1 builder.go:243] stopped\n
Apr 08 02:28:50.483 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-227.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:16:31 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:28:50.483 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-227.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:16:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:16:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:16:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:16:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:16:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0408 02:16:31.567189       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:16:31 http.go:107: HTTPS: listening on [::]:9095\n
Apr 08 02:28:50.790 E ns/openshift-monitoring pod/kube-state-metrics-77f869d69b-cxwpq node/ip-10-0-154-233.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 08 02:28:52.796 E ns/openshift-monitoring pod/openshift-state-metrics-b9d96b794-znq6l node/ip-10-0-154-233.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 08 02:28:53.448 E ns/openshift-monitoring pod/node-exporter-wffws node/ip-10-0-135-223.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:06:32Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:32Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:29:01.881 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 02:17:30 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 08 02:29:01.881 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 02:17:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 02:17:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:17:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:17:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 02:17:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:17:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 02:17:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:17:31 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0408 02:17:31.044779       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:17:31 http.go:107: HTTPS: listening on [::]:9091\n
Apr 08 02:29:01.881 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T02:17:30.373677936Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T02:17:30.375751509Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T02:17:35.582331905Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T02:17:35.582451031Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 08 02:29:08.347 E ns/openshift-controller-manager pod/controller-manager-zxqgn node/ip-10-0-153-196.ec2.internal container/controller-manager container exited with code 137 (Error): I0408 02:06:44.484671       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 02:06:44.487017       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable-initial@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 02:06:44.487038       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable-initial@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 02:06:44.487109       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 02:06:44.487228       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 02:29:08.514 E ns/openshift-monitoring pod/node-exporter-xk8ld node/ip-10-0-153-196.ec2.internal container/node-exporter container exited with code 143 (Error): ode_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:02Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\ntime="2020-04-08T02:16:55Z" level=error msg="ERROR: netclass collector failed after 0.036540s: could not get net class info: error obtaining net class info: could not access file duplex: no such device" source="collector.go:132"\n
Apr 08 02:29:10.929 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-233.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:29:08.059Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:29:08.064Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:29:08.065Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:08.066Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:29:08.066Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:29:08.066Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:29:08.067Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:29:08.067Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:29:11.392 E ns/openshift-monitoring pod/prometheus-adapter-b8f5b89d-n26fn node/ip-10-0-129-68.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 02:16:27.347819       1 adapter.go:93] successfully using in-cluster auth\nI0408 02:16:28.077358       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 02:29:13.057 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-233.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:16:48 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:29:13.057 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-233.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:16:48 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:48 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:16:48 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:16:48 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:16:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:16:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:16:48 http.go:107: HTTPS: listening on [::]:9095\nI0408 02:16:48.419186       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:21:51 server.go:3056: http: TLS handshake error from 10.128.2.5:47438: read tcp 10.131.0.22:9095->10.128.2.5:47438: read: connection reset by peer\n
Apr 08 02:29:14.839 E ns/openshift-monitoring pod/node-exporter-g66vv node/ip-10-0-129-68.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:09:56Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:56Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:29:29.083 E ns/openshift-monitoring pod/node-exporter-7jhc6 node/ip-10-0-134-80.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:06:01Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:06:01Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:29:30.529 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-68.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:16:53 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:29:30.529 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-68.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:16:54 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:54 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:16:54 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:16:54 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:16:54 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:16:54 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:16:54 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:16:54 http.go:107: HTTPS: listening on [::]:9095\nI0408 02:16:54.209473       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 02:29:44.157 E ns/openshift-monitoring pod/node-exporter-c4r57 node/ip-10-0-154-233.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:09:49Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:09:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:29:50.849 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-227.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:29:45.482Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:29:45.489Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:29:45.489Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:29:45.490Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:29:45.490Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:29:45.492Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:29:45.492Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:29:58.210 E ns/openshift-marketplace pod/certified-operators-6c9ddbd9cf-h5szh node/ip-10-0-154-233.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 02:30:03.222 E ns/openshift-marketplace pod/community-operators-df9d4d6c4-xxz9s node/ip-10-0-154-233.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 08 02:30:11.696 E ns/openshift-console pod/console-7f8cb59f5d-wqq27 node/ip-10-0-153-196.ec2.internal container/console container exited with code 2 (Error): 2020-04-08T02:16:33Z cmd/main: cookies are secure!\n2020-04-08T02:16:33Z cmd/main: Binding to [::]:8443...\n2020-04-08T02:16:33Z cmd/main: using TLS\n
Apr 08 02:30:19.273 E ns/openshift-console pod/console-7f8cb59f5d-x5g66 node/ip-10-0-134-80.ec2.internal container/console container exited with code 2 (Error): 2020-04-08T02:16:23Z cmd/main: cookies are secure!\n2020-04-08T02:16:23Z cmd/main: Binding to [::]:8443...\n2020-04-08T02:16:23Z cmd/main: using TLS\n
Apr 08 02:31:32.275 E ns/openshift-sdn pod/sdn-controller-2mspc node/ip-10-0-135-223.ec2.internal container/sdn-controller container exited with code 2 (Error): :114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:12:19.851757       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:12:19.852060       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:12:19.852367       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:12:19.852535       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:17:28.327814       1 vnids.go:115] Allocated netid 4608891 for namespace "e2e-k8s-sig-apps-job-upgrade-9931"\nI0408 02:17:28.359255       1 vnids.go:115] Allocated netid 11638245 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-8807"\nI0408 02:17:28.369824       1 vnids.go:115] Allocated netid 4167903 for namespace "e2e-control-plane-available-7451"\nI0408 02:17:28.394986       1 vnids.go:115] Allocated netid 13591938 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6256"\nI0408 02:17:28.413680       1 vnids.go:115] Allocated netid 13360006 for namespace "e2e-k8s-sig-apps-deployment-upgrade-415"\nI0408 02:17:28.444529       1 vnids.go:115] Allocated netid 9501723 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9774"\nI0408 02:17:28.494633       1 vnids.go:115] Allocated netid 7618841 for namespace "e2e-frontend-ingress-available-3070"\nI0408 02:17:28.511232       1 vnids.go:115] Allocated netid 7755772 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-3164"\nI0408 02:17:28.524058       1 vnids.go:115] Allocated netid 10724095 for namespace "e2e-k8s-service-lb-available-9345"\nE0408 02:23:58.377979       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://api-int.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=22687&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp 10.0.140.25:6443: connect: connection refused\n
Apr 08 02:31:35.844 E ns/openshift-sdn pod/sdn-wqc9s node/ip-10-0-129-68.ec2.internal container/sdn container exited with code 255 (Error): :45.546054    2328 pod.go:503] CNI_ADD openshift-image-registry/image-registry-54d75dd9cc-r9pww got IP 10.129.2.26, ofport 27\nI0408 02:29:03.497712    2328 pod.go:503] CNI_ADD openshift-marketplace/certified-operators-7d8bfd8d4-xp2hk got IP 10.129.2.27, ofport 28\nI0408 02:29:03.702880    2328 pod.go:503] CNI_ADD openshift-marketplace/redhat-operators-6c6496c64f-btj9h got IP 10.129.2.28, ofport 29\nI0408 02:29:03.800129    2328 pod.go:503] CNI_ADD openshift-marketplace/redhat-operators-747b4cb76-npstt got IP 10.129.2.29, ofport 30\nI0408 02:29:03.891750    2328 pod.go:503] CNI_ADD openshift-marketplace/redhat-marketplace-f46ddc6b8-sqmft got IP 10.129.2.30, ofport 31\nI0408 02:29:03.980829    2328 pod.go:503] CNI_ADD openshift-monitoring/grafana-77b8b55848-jnnl9 got IP 10.129.2.31, ofport 32\nI0408 02:29:04.079193    2328 pod.go:503] CNI_ADD openshift-marketplace/community-operators-6bcf7594dc-wlz4s got IP 10.129.2.32, ofport 33\nI0408 02:29:05.377641    2328 pod.go:539] CNI_DEL openshift-marketplace/redhat-operators-6c6496c64f-btj9h\nI0408 02:29:05.552237    2328 pod.go:539] CNI_DEL openshift-image-registry/node-ca-9vz2d\nI0408 02:29:07.836880    2328 pod.go:539] CNI_DEL openshift-monitoring/prometheus-adapter-b8f5b89d-n26fn\nI0408 02:29:10.954392    2328 pod.go:539] CNI_DEL openshift-monitoring/thanos-querier-7d8b787bc8-mrvlm\nI0408 02:29:11.660402    2328 pod.go:503] CNI_ADD openshift-monitoring/thanos-querier-6c48fb99cd-dm7t7 got IP 10.129.2.33, ofport 34\nI0408 02:29:13.945658    2328 pod.go:503] CNI_ADD openshift-image-registry/node-ca-jl4n8 got IP 10.129.2.34, ofport 35\nI0408 02:29:18.281814    2328 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-0\nI0408 02:29:29.727821    2328 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-0\nI0408 02:29:32.256645    2328 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-0 got IP 10.129.2.35, ofport 36\nF0408 02:31:35.659759    2328 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 02:32:00.142 E ns/openshift-multus pod/multus-8t85p node/ip-10-0-141-227.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 02:32:00.946 E ns/openshift-multus pod/multus-admission-controller-l577t node/ip-10-0-153-196.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 02:32:00.987 E ns/openshift-sdn pod/sdn-controller-wjhzs node/ip-10-0-153-196.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 01:57:00.452559       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 02:01:50.136580       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0408 02:03:56.627833       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\nE0408 02:04:48.460451       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.148.218:6443: i/o timeout\n
Apr 08 02:32:13.676 E ns/openshift-sdn pod/sdn-controller-g62vb node/ip-10-0-134-80.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 01:57:01.627786       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 02:03:56.612506       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 08 02:32:41.160 E ns/openshift-multus pod/multus-hgls2 node/ip-10-0-153-196.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 02:32:47.677 E ns/openshift-sdn pod/sdn-5n997 node/ip-10-0-135-223.ec2.internal container/sdn container exited with code 255 (Error): I0408 02:31:37.674410   99121 node.go:146] Initializing SDN node "ip-10-0-135-223.ec2.internal" (10.0.135.223) of type "redhat/openshift-ovs-networkpolicy"\nI0408 02:31:37.679428   99121 cmd.go:151] Starting node networking (unknown)\nI0408 02:31:37.824983   99121 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 02:31:37.945923   99121 proxy.go:103] Using unidling+iptables Proxier.\nI0408 02:31:37.946270   99121 proxy.go:129] Tearing down userspace rules.\nI0408 02:31:37.963495   99121 networkpolicy.go:330] SyncVNIDRules: 9 unused VNIDs\nI0408 02:31:38.229794   99121 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 02:31:38.242802   99121 config.go:313] Starting service config controller\nI0408 02:31:38.243229   99121 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 02:31:38.242875   99121 config.go:131] Starting endpoints config controller\nI0408 02:31:38.243389   99121 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 02:31:38.243140   99121 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 02:31:38.343497   99121 shared_informer.go:204] Caches are synced for service config \nI0408 02:31:38.343767   99121 shared_informer.go:204] Caches are synced for endpoints config \nF0408 02:32:46.768910   99121 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 02:32:49.225 E ns/openshift-multus pod/multus-admission-controller-s7pgf node/ip-10-0-134-80.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 02:33:07.306 E ns/openshift-sdn pod/sdn-pt47x node/ip-10-0-153-196.ec2.internal container/sdn container exited with code 255 (Error): I0408 02:32:13.339225   97508 node.go:146] Initializing SDN node "ip-10-0-153-196.ec2.internal" (10.0.153.196) of type "redhat/openshift-ovs-networkpolicy"\nI0408 02:32:13.344290   97508 cmd.go:151] Starting node networking (unknown)\nI0408 02:32:13.468457   97508 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 02:32:13.599499   97508 proxy.go:103] Using unidling+iptables Proxier.\nI0408 02:32:13.599863   97508 proxy.go:129] Tearing down userspace rules.\nI0408 02:32:13.611970   97508 networkpolicy.go:330] SyncVNIDRules: 6 unused VNIDs\nI0408 02:32:13.804066   97508 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 02:32:13.813223   97508 config.go:313] Starting service config controller\nI0408 02:32:13.813257   97508 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 02:32:13.813273   97508 config.go:131] Starting endpoints config controller\nI0408 02:32:13.813296   97508 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 02:32:13.813478   97508 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 02:32:13.913472   97508 shared_informer.go:204] Caches are synced for service config \nI0408 02:32:13.913472   97508 shared_informer.go:204] Caches are synced for endpoints config \nI0408 02:32:14.611791   97508 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-px8s8 got IP 10.129.0.75, ofport 76\nF0408 02:33:06.418070   97508 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 02:33:28.351 E ns/openshift-sdn pod/sdn-tgkbv node/ip-10-0-141-227.ec2.internal container/sdn container exited with code 255 (Error): I0408 02:32:40.109816   54450 node.go:146] Initializing SDN node "ip-10-0-141-227.ec2.internal" (10.0.141.227) of type "redhat/openshift-ovs-networkpolicy"\nI0408 02:32:40.114217   54450 cmd.go:151] Starting node networking (unknown)\nI0408 02:32:40.236453   54450 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 02:32:40.339095   54450 proxy.go:103] Using unidling+iptables Proxier.\nI0408 02:32:40.339384   54450 proxy.go:129] Tearing down userspace rules.\nI0408 02:32:40.345908   54450 networkpolicy.go:330] SyncVNIDRules: 1 unused VNIDs\nI0408 02:32:40.516463   54450 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 02:32:40.525031   54450 config.go:313] Starting service config controller\nI0408 02:32:40.525066   54450 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 02:32:40.525053   54450 config.go:131] Starting endpoints config controller\nI0408 02:32:40.525086   54450 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 02:32:40.525229   54450 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 02:32:40.625259   54450 shared_informer.go:204] Caches are synced for endpoints config \nI0408 02:32:40.625363   54450 shared_informer.go:204] Caches are synced for service config \nF0408 02:33:27.279786   54450 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 02:33:29.698 E ns/openshift-multus pod/multus-wbkhw node/ip-10-0-154-233.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 02:33:34.529 E ns/openshift-multus pod/multus-admission-controller-jxn6z node/ip-10-0-135-223.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 02:34:15.157 E ns/openshift-multus pod/multus-55d72 node/ip-10-0-129-68.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 02:35:03.713 E ns/openshift-multus pod/multus-xrwp7 node/ip-10-0-134-80.ec2.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 02:36:39.085 E ns/openshift-machine-config-operator pod/machine-config-operator-647d4b4db5-6vh7w node/ip-10-0-134-80.ec2.internal container/machine-config-operator container exited with code 2 (Error): , Name:"machine-config", UID:"802ee1eb-1bf9-4db9-8353-c411437c8be0", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-04-08-014149}]\nE0408 02:00:24.556873       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0408 02:00:24.557102       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0408 02:00:25.602388       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0408 02:00:29.430564       1 sync.go:61] [init mode] synced RenderConfig in 5.465900196s\nI0408 02:00:29.964289       1 sync.go:61] [init mode] synced MachineConfigPools in 405.086775ms\nI0408 02:01:10.666740       1 sync.go:61] [init mode] synced MachineConfigDaemon in 40.651118585s\nI0408 02:01:25.767650       1 sync.go:61] [init mode] synced MachineConfigController in 15.097921216s\nI0408 02:01:35.874398       1 sync.go:61] [init mode] synced MachineConfigServer in 10.103769021s\nI0408 02:01:51.519402       1 sync.go:61] [init mode] synced RequiredPools in 15.642126106s\nI0408 02:01:52.698224       1 sync.go:92] Initialization complete\nE0408 02:03:56.651951       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 08 02:38:52.390 E ns/openshift-machine-config-operator pod/machine-config-daemon-dfp7s node/ip-10-0-154-233.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:38:56.042 E ns/openshift-machine-config-operator pod/machine-config-daemon-2cml9 node/ip-10-0-141-227.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:39:00.533 E ns/openshift-machine-config-operator pod/machine-config-daemon-jpzl8 node/ip-10-0-134-80.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:39:12.608 E ns/openshift-machine-config-operator pod/machine-config-daemon-crtgv node/ip-10-0-153-196.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:39:31.647 E ns/openshift-machine-config-operator pod/machine-config-controller-658444758f-ttd9r node/ip-10-0-134-80.ec2.internal container/machine-config-controller container exited with code 2 (Error): ted with rendered-master-2714f4fd6aebd04a817c3e8a07cb4b25\nE0408 02:03:56.636001       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0408 02:10:28.100784       1 node_controller.go:452] Pool worker: node ip-10-0-154-233.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:10:28.100813       1 node_controller.go:452] Pool worker: node ip-10-0-154-233.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:10:28.100823       1 node_controller.go:452] Pool worker: node ip-10-0-154-233.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0408 02:10:48.844556       1 node_controller.go:452] Pool worker: node ip-10-0-129-68.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:10:48.844597       1 node_controller.go:452] Pool worker: node ip-10-0-129-68.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:10:48.844611       1 node_controller.go:452] Pool worker: node ip-10-0-129-68.ec2.internal changed machineconfiguration.openshift.io/state = Done\nI0408 02:11:02.139488       1 node_controller.go:452] Pool worker: node ip-10-0-141-227.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:11:02.139519       1 node_controller.go:452] Pool worker: node ip-10-0-141-227.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-df50c4f82666afd8a30cde8622c9b64a\nI0408 02:11:02.139529       1 node_controller.go:452] Pool worker: node ip-10-0-141-227.ec2.internal changed machineconfiguration.openshift.io/state = Done\n
Apr 08 02:41:30.101 E ns/openshift-machine-config-operator pod/machine-config-server-5vqlz node/ip-10-0-153-196.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 02:01:29.395987       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 02:01:29.396806       1 api.go:51] Launching server on :22624\nI0408 02:01:29.396858       1 api.go:51] Launching server on :22623\nI0408 02:06:52.512026       1 api.go:97] Pool worker requested by 10.0.148.218:4271\nI0408 02:06:58.987366       1 api.go:97] Pool worker requested by 10.0.148.218:25014\n
Apr 08 02:41:38.046 E ns/openshift-machine-config-operator pod/machine-config-server-7qhpt node/ip-10-0-134-80.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 02:01:34.763405       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 02:01:34.764702       1 api.go:51] Launching server on :22623\nI0408 02:01:34.764754       1 api.go:51] Launching server on :22624\nI0408 02:06:45.020933       1 api.go:97] Pool worker requested by 10.0.140.25:36112\n
Apr 08 02:41:41.762 E ns/openshift-monitoring pod/telemeter-client-844b88577f-z85hw node/ip-10-0-154-233.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Apr 08 02:41:41.762 E ns/openshift-monitoring pod/telemeter-client-844b88577f-z85hw node/ip-10-0-154-233.ec2.internal container/reload container exited with code 2 (Error): 
Apr 08 02:41:41.802 E ns/openshift-monitoring pod/thanos-querier-6c48fb99cd-9ljt7 node/ip-10-0-154-233.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 02:29:02 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 02:29:02 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:29:02 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:29:02 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 02:29:02 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:29:02 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 02:29:02 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:29:02 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 02:29:02 http.go:107: HTTPS: listening on [::]:9091\nI0408 02:29:02.224588       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 02:41:42.829 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-233.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:29:27 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:41:42.829 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-154-233.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:29:28 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:29:28 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:29:28 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:29:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:29:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:29:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:29:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:29:28 http.go:107: HTTPS: listening on [::]:9095\nI0408 02:29:28.214266       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 02:42:00.452 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-68.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:41:58.275Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:41:58.288Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:41:58.288Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:41:58.289Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:41:58.289Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:41:58.289Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:41:58.290Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:41:58.290Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:41:58.290Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:41:58.291Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:41:58.291Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:42:26.899 E ns/e2e-k8s-service-lb-available-9345 pod/service-test-sqs9q node/ip-10-0-154-233.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 08 02:43:42.061 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 02:44:24.313 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-134-80.ec2.internal" not ready since 2020-04-08 02:43:11 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-134-80.ec2.internal members are unhealthy,  members are unknown
Apr 08 02:44:34.805 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-controller-manager-recovery-controller container exited with code 1 (Error): WatchBookmarks=true&resourceVersion=27019&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0408 02:42:13.766219       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 02:42:13.766421       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766616       1 csrcontroller.go:83] Shutting down CSR controller\nI0408 02:42:13.766937       1 csrcontroller.go:85] CSR controller shut down\nI0408 02:42:13.766629       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766637       1 base_controller.go:101] Shutting down CertRotationController ...\nI0408 02:42:13.766656       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766673       1 base_controller.go:58] Shutting down worker of CertRotationController controller ...\nI0408 02:42:13.766998       1 base_controller.go:48] All CertRotationController workers have been terminated\nI0408 02:42:13.766694       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766721       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766783       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766815       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766839       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:42:13.766863       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\n
Apr 08 02:44:34.805 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): ments.apps)\nE0408 02:28:05.428111       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)\nE0408 02:28:05.428208       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0408 02:28:05.428278       1 reflector.go:307] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: unknown (get deploymentconfigs.apps.openshift.io)\nE0408 02:28:05.428373       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: unknown (get jobs.batch)\nE0408 02:28:05.428441       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: unknown (get ingresses.networking.k8s.io)\nE0408 02:28:05.428502       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: unknown (get routes.route.openshift.io)\nE0408 02:28:05.428591       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0408 02:28:05.428630       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: unknown (get ingresses.extensions)\nW0408 02:41:42.331367       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 457; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:41:42.331448       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 351; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 02:44:34.805 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 2277       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:41:46.372747       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:41:47.302880       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:41:47.303831       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:41:56.382662       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:41:56.383128       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:41:57.320357       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:41:57.320952       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:42:06.391886       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:42:06.392210       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:42:07.332135       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:42:07.332531       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:42:13.926371       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:42:13.933019       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 02:44:34.805 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-controller-manager container exited with code 2 (Error): ver-6bd9f84b94-dbplf\nI0408 02:42:00.604885       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"d74c95e5-c973-46ec-b227-188875283c07", APIVersion:"apps/v1", ResourceVersion:"37652", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-6bd9f84b94 to 0\nI0408 02:42:00.626594       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-6bd9f84b94", UID:"05fe98f3-f10a-4ff0-be6d-6d4c2366e0cc", APIVersion:"apps/v1", ResourceVersion:"37801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-6bd9f84b94-dbplf\nI0408 02:42:11.089767       1 controller.go:661] Detected change in list of current cluster nodes. New node set: map[ip-10-0-129-68.ec2.internal:{} ip-10-0-135-223.ec2.internal:{} ip-10-0-141-227.ec2.internal:{} ip-10-0-153-196.ec2.internal:{}]\nI0408 02:42:11.137470       1 aws_loadbalancer.go:1391] Instances removed from load-balancer ac4f39a6141fa4f9ca32cfaf9aa8e7f5\nI0408 02:42:11.463224       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"openshift-ingress", Name:"router-default", UID:"c4f39a61-41fa-4f9c-a32c-faf9aa8e7f52", APIVersion:"v1", ResourceVersion:"12560", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\nI0408 02:42:11.499760       1 aws_loadbalancer.go:1391] Instances removed from load-balancer a8aeb607573c9483dbe5ffe528bd5e1d\nI0408 02:42:11.710165       1 controller.go:669] Successfully updated 2 out of 2 load balancers to direct traffic to the updated set of nodes\nI0408 02:42:11.710566       1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-k8s-service-lb-available-9345", Name:"service-test", UID:"8aeb6075-73c9-483d-be5f-fe528bd5e1de", APIVersion:"v1", ResourceVersion:"22739", FieldPath:""}): type: 'Normal' reason: 'UpdatedLoadBalancer' Updated load balancer with new hosts\n
Apr 08 02:44:34.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:41:54.665822       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:41:54.665982       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:41:56.674028       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:41:56.674058       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:41:58.695324       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:41:58.695352       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:00.705298       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:00.705404       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:02.717218       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:02.717246       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:04.727745       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:04.727932       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:06.739034       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:06.739065       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:08.753567       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:08.753594       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:10.765888       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:10.765917       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:42:12.781136       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:42:12.781164       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 02:44:34.843 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-scheduler container exited with code 2 (Error): ig-operator/etcd-quorum-guard-6496b44c7-9pkl7: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:41:55.096570       1 scheduler.go:728] pod openshift-monitoring/alertmanager-main-1 is bound successfully on node "ip-10-0-141-227.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:41:55.196896       1 scheduler.go:728] pod openshift-monitoring/prometheus-k8s-1 is bound successfully on node "ip-10-0-129-68.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:41:58.378013       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-544c884d8d-wj2wj: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:41:59.377649       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6496b44c7-9pkl7: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:42:08.378128       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6496b44c7-9pkl7: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:42:09.380702       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-544c884d8d-wj2wj: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 08 02:44:34.864 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-apiserver container exited with code 1 (Error): .go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 02:42:14.240738       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0408 02:42:14.241358       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 02:42:14.241624       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nW0408 02:42:14.241791       1 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured\nI0408 02:42:14.297599       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-80.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0408 02:42:14.326405       1 healthz.go:200] [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-StartOAuthInformers ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nhealthz check failed\n
Apr 08 02:44:34.864 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 02:28:01.218124       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 02:44:34.864 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:42:01.073274       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:42:01.073594       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:42:11.116673       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:42:11.117042       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 02:44:34.884 E ns/openshift-sdn pod/sdn-controller-4zctm node/ip-10-0-134-80.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 02:32:19.784893       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 08 02:44:34.920 E ns/openshift-multus pod/multus-admission-controller-pfvhw node/ip-10-0-134-80.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 02:44:34.944 E ns/openshift-multus pod/multus-djsqx node/ip-10-0-134-80.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:44:34.962 E ns/openshift-machine-config-operator pod/machine-config-daemon-d9lqc node/ip-10-0-134-80.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:44:34.979 E ns/openshift-sdn pod/ovs-dqvbs node/ip-10-0-134-80.ec2.internal container/openvswitch container exited with code 143 (Error): 912 on port 76\n2020-04-08T02:36:39.728Z|00096|connmgr|INFO|br0<->unix#327: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T02:36:39.780Z|00097|connmgr|INFO|br0<->unix#330: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:39:30.939Z|00098|connmgr|INFO|br0<->unix#453: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:39:30.971Z|00099|connmgr|INFO|br0<->unix#456: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:39:30.994Z|00100|bridge|INFO|bridge br0: deleted interface vethe93a986d on port 20\n2020-04-08T02:41:42.602Z|00101|connmgr|INFO|br0<->unix#553: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:42.702Z|00102|connmgr|INFO|br0<->unix#556: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:42.735Z|00103|bridge|INFO|bridge br0: deleted interface vetha979299f on port 66\n2020-04-08T02:41:47.499Z|00104|connmgr|INFO|br0<->unix#562: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:47.537Z|00105|connmgr|INFO|br0<->unix#565: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:47.577Z|00106|bridge|INFO|bridge br0: deleted interface vethb5565b34 on port 72\n2020-04-08T02:41:47.811Z|00107|connmgr|INFO|br0<->unix#568: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:47.898Z|00108|connmgr|INFO|br0<->unix#571: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:47.937Z|00109|bridge|INFO|bridge br0: deleted interface veth001cabd2 on port 75\n2020-04-08T02:41:50.473Z|00110|connmgr|INFO|br0<->unix#577: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:50.516Z|00111|connmgr|INFO|br0<->unix#580: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:50.543Z|00112|bridge|INFO|bridge br0: deleted interface vethdae6f2f6 on port 26\n2020-04-08T02:42:07.585Z|00113|connmgr|INFO|br0<->unix#596: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:42:07.614Z|00114|connmgr|INFO|br0<->unix#599: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:42:07.636Z|00115|bridge|INFO|bridge br0: deleted interface veth20989199 on port 71\n2020-04-08 02:42:13 info: Saving flows ...\nTerminated\n
Apr 08 02:44:35.012 E ns/openshift-controller-manager pod/controller-manager-hlrrv node/ip-10-0-134-80.ec2.internal container/controller-manager container exited with code 1 (Error): I0408 02:29:21.606682       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 02:29:21.608540       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 02:29:21.608565       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 02:29:21.608688       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 02:29:21.608829       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 02:44:35.040 E ns/openshift-etcd pod/etcd-ip-10-0-134-80.ec2.internal node/ip-10-0-134-80.ec2.internal container/etcd-metrics container exited with code 2 (Error): s/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-80.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-134-80.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T02:24:40.577Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:24:40.578Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-80.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-134-80.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T02:24:40.582Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T02:24:40.582Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:24:40.582Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T02:24:40.583Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.134.80:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.134.80:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-08T02:24:41.585Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.134.80:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.134.80:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 02:44:35.057 E ns/openshift-cluster-node-tuning-operator pod/tuned-mc2gl node/ip-10-0-134-80.ec2.internal container/tuned container exited with code 143 (Error): nternal" added, tuned profile requested: openshift-control-plane\nI0408 02:28:51.729315   89176 tuned.go:169] disabling system tuned...\nI0408 02:28:51.751125   89176 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 02:28:52.674797   89176 tuned.go:392] getting recommended profile...\nI0408 02:28:52.890140   89176 tuned.go:419] active profile () != recommended profile (openshift-control-plane)\nI0408 02:28:52.890260   89176 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 02:28:52.890419   89176 tuned.go:285] starting tuned...\n2020-04-08 02:28:53,059 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 02:28:53,076 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 02:28:53,077 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 02:28:53,078 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 02:28:53,080 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 02:28:53,175 INFO     tuned.daemon.controller: starting controller\n2020-04-08 02:28:53,176 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 02:28:53,208 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 02:28:53,209 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 02:28:53,213 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 02:28:53,215 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 02:28:53,219 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 02:28:53,411 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 02:28:53,421 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 08 02:44:35.142 E ns/openshift-monitoring pod/node-exporter-qfvgv node/ip-10-0-134-80.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:29:41Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:41Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:44:35.219 E ns/openshift-machine-config-operator pod/machine-config-server-5kq7z node/ip-10-0-134-80.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 02:41:40.300676       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 02:41:40.302303       1 api.go:51] Launching server on :22624\nI0408 02:41:40.302375       1 api.go:51] Launching server on :22623\n
Apr 08 02:44:44.867 E ns/openshift-machine-config-operator pod/machine-config-daemon-d9lqc node/ip-10-0-134-80.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 02:44:46.890 E ns/openshift-cluster-node-tuning-operator pod/tuned-gmf5w node/ip-10-0-154-233.ec2.internal container/tuned container exited with code 143 (Error):  tuned.go:469] profile "ip-10-0-154-233.ec2.internal" added, tuned profile requested: openshift-node\nI0408 02:28:37.701851   76816 tuned.go:169] disabling system tuned...\nI0408 02:28:37.708807   76816 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 02:28:38.682326   76816 tuned.go:392] getting recommended profile...\nI0408 02:28:38.967412   76816 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0408 02:28:38.967539   76816 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 02:28:38.967611   76816 tuned.go:285] starting tuned...\n2020-04-08 02:28:39,178 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 02:28:39,185 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 02:28:39,185 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 02:28:39,186 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 02:28:39,187 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 02:28:39,221 INFO     tuned.daemon.controller: starting controller\n2020-04-08 02:28:39,221 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 02:28:39,232 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 02:28:39,233 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 02:28:39,236 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 02:28:39,238 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 02:28:39,240 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 02:28:39,369 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 02:28:39,375 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 08 02:44:46.910 E ns/openshift-monitoring pod/node-exporter-n9lst node/ip-10-0-154-233.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:29:48Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:48Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:44:46.966 E ns/openshift-sdn pod/ovs-9gvrj node/ip-10-0-154-233.ec2.internal container/openvswitch container exited with code 1 (Error): w_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:41.477Z|00135|bridge|INFO|bridge br0: deleted interface veth8d5509c4 on port 11\n2020-04-08T02:41:41.526Z|00136|connmgr|INFO|br0<->unix#519: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:41.572Z|00137|connmgr|INFO|br0<->unix#522: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:41.611Z|00138|bridge|INFO|bridge br0: deleted interface vethe0faba81 on port 6\n2020-04-08T02:41:41.662Z|00139|connmgr|INFO|br0<->unix#525: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:41:41.717Z|00140|connmgr|INFO|br0<->unix#528: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:41:41.749Z|00141|bridge|INFO|bridge br0: deleted interface veth91cc986a on port 7\n2020-04-08T02:41:41.732Z|00025|jsonrpc|WARN|unix#467: receive error: Connection reset by peer\n2020-04-08T02:41:41.732Z|00026|reconnect|WARN|unix#467: connection dropped (Connection reset by peer)\n2020-04-08T02:42:26.254Z|00142|connmgr|INFO|br0<->unix#561: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:42:26.288Z|00143|connmgr|INFO|br0<->unix#564: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:42:26.313Z|00144|bridge|INFO|bridge br0: deleted interface vethe8191060 on port 4\n2020-04-08T02:42:27.911Z|00145|connmgr|INFO|br0<->unix#568: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:42:27.948Z|00146|connmgr|INFO|br0<->unix#571: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:42:27.973Z|00147|bridge|INFO|bridge br0: deleted interface vethbbc2d09c on port 8\n2020-04-08T02:42:27.960Z|00027|jsonrpc|WARN|unix#506: receive error: Connection reset by peer\n2020-04-08T02:42:27.960Z|00028|reconnect|WARN|unix#506: connection dropped (Connection reset by peer)\n2020-04-08T02:42:28.489Z|00029|jsonrpc|WARN|unix#511: receive error: Connection reset by peer\n2020-04-08T02:42:28.489Z|00030|reconnect|WARN|unix#511: connection dropped (Connection reset by peer)\n2020-04-08 02:42:51 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 02:44:46.988 E ns/openshift-multus pod/multus-db784 node/ip-10-0-154-233.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:44:47.022 E ns/openshift-machine-config-operator pod/machine-config-daemon-qmdbh node/ip-10-0-154-233.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:44:55.475 E ns/openshift-machine-config-operator pod/machine-config-daemon-qmdbh node/ip-10-0-154-233.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 02:44:56.520 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 08 02:45:03.783 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 08 02:45:06.874 E ns/openshift-marketplace pod/community-operators-6bcf7594dc-wlz4s node/ip-10-0-129-68.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 08 02:45:07.814 E ns/openshift-monitoring pod/telemeter-client-844b88577f-7vhv2 node/ip-10-0-129-68.ec2.internal container/reload container exited with code 2 (Error): 
Apr 08 02:45:07.814 E ns/openshift-monitoring pod/telemeter-client-844b88577f-7vhv2 node/ip-10-0-129-68.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Apr 08 02:45:08.130 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-68.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 02:41:59 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 08 02:45:08.130 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-68.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 02:41:59 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 02:41:59 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:41:59 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:41:59 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 02:41:59 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:41:59 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 02:41:59 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:41:59 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 02:41:59 http.go:107: HTTPS: listening on [::]:9091\nI0408 02:41:59.793731       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:45:00 oauthproxy.go:774: basicauth: 10.128.0.70:40032 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 08 02:45:08.130 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-68.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T02:41:59.142961365Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T02:41:59.145901665Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T02:42:04.312999677Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T02:42:04.313100323Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 08 02:45:08.231 E ns/openshift-monitoring pod/prometheus-adapter-7bf86d6ff7-52294 node/ip-10-0-129-68.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 02:41:43.548318       1 adapter.go:93] successfully using in-cluster auth\nI0408 02:41:43.982618       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 02:45:08.266 E ns/openshift-monitoring pod/openshift-state-metrics-5859985d6f-wflpm node/ip-10-0-129-68.ec2.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 08 02:45:08.947 E ns/openshift-monitoring pod/thanos-querier-6c48fb99cd-dm7t7 node/ip-10-0-129-68.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 02:29:23 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 02:29:23 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:29:23 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:29:23 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 02:29:23 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:29:23 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 02:29:23 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:29:23 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0408 02:29:23.255173       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:29:23 http.go:107: HTTPS: listening on [::]:9091\n
Apr 08 02:45:44.186 E kube-apiserver failed contacting the API: Get https://api.ci-op-yzvgscmf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=41459&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp 3.231.122.136:6443: connect: connection refused
Apr 08 02:45:49.855 E ns/e2e-k8s-service-lb-available-9345 pod/service-test-drvbz node/ip-10-0-129-68.ec2.internal container/netexec container exited with code 2 (Error): 
Apr 08 02:48:00.038 E ns/openshift-cluster-node-tuning-operator pod/tuned-tz7z8 node/ip-10-0-129-68.ec2.internal container/tuned container exited with code 143 (Error): 37926, current: 28746\nI0408 02:43:39.086836   56688 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574 (started: 2020-04-08 02:43:00.059383477 +0000 UTC m=+816.231392932) (total time: 39.027418161s):\nTrace[1747278511]: [39.027418161s] [39.027418161s] END\nE0408 02:43:39.086868   56688 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574: Failed to list *v1.Profile: Timeout: Too large resource version: 37926, current: 28746\nI0408 02:44:25.886455   56688 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574 (started: 2020-04-08 02:43:46.858329679 +0000 UTC m=+863.030338837) (total time: 39.028084501s):\nTrace[817455089]: [39.028084501s] [39.028084501s] END\nE0408 02:44:25.886486   56688 reflector.go:178] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574: Failed to list *v1.Profile: Timeout: Too large resource version: 37926, current: 28746\nI0408 02:45:13.262762   56688 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:574 (started: 2020-04-08 02:44:42.757821767 +0000 UTC m=+918.929830926) (total time: 30.504887351s):\nTrace[1006933274]: [30.504849916s] [30.504849916s] Objects listed\nI0408 02:45:13.263111   56688 tuned.go:486] profile "ip-10-0-129-68.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 02:45:13.839338   56688 tuned.go:392] getting recommended profile...\nI0408 02:45:14.013508   56688 tuned.go:428] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0408 02:46:14.501707   56688 tuned.go:114] received signal: terminated\nI0408 02:46:14.501765   56688 tuned.go:326] sending TERM to PID 56812\n2020-04-08 02:46:14,505 INFO     tuned.daemon.controller: terminating controller\n2020-04-08 02:46:14,505 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 08 02:48:00.055 E ns/openshift-monitoring pod/node-exporter-rvvlv node/ip-10-0-129-68.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:29:27Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:27Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:48:00.072 E ns/openshift-sdn pod/ovs-rz76b node/ip-10-0-129-68.ec2.internal container/openvswitch container exited with code 1 (Error): ce vethb80091ad on port 25\n2020-04-08T02:45:07.117Z|00137|connmgr|INFO|br0<->unix#762: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:07.192Z|00138|connmgr|INFO|br0<->unix#765: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:07.223Z|00139|bridge|INFO|bridge br0: deleted interface vethd2233907 on port 42\n2020-04-08T02:45:07.291Z|00140|connmgr|INFO|br0<->unix#768: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:07.370Z|00141|connmgr|INFO|br0<->unix#771: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:07.446Z|00142|bridge|INFO|bridge br0: deleted interface veth0a65d3b4 on port 39\n2020-04-08T02:45:07.497Z|00143|connmgr|INFO|br0<->unix#774: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:07.568Z|00144|connmgr|INFO|br0<->unix#777: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:07.614Z|00145|bridge|INFO|bridge br0: deleted interface veth6320fbfb on port 34\n2020-04-08T02:45:48.988Z|00013|jsonrpc|WARN|unix#703: receive error: Connection reset by peer\n2020-04-08T02:45:48.989Z|00014|reconnect|WARN|unix#703: connection dropped (Connection reset by peer)\n2020-04-08T02:45:48.993Z|00015|jsonrpc|WARN|unix#704: receive error: Connection reset by peer\n2020-04-08T02:45:48.994Z|00016|reconnect|WARN|unix#704: connection dropped (Connection reset by peer)\n2020-04-08T02:45:48.939Z|00146|connmgr|INFO|br0<->unix#813: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:48.977Z|00147|connmgr|INFO|br0<->unix#816: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:49.000Z|00148|bridge|INFO|bridge br0: deleted interface veth19789827 on port 41\n2020-04-08T02:45:50.614Z|00149|connmgr|INFO|br0<->unix#820: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:50.650Z|00150|connmgr|INFO|br0<->unix#823: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:50.672Z|00151|bridge|INFO|bridge br0: deleted interface veth3e43c0ba on port 40\n2020-04-08 02:46:14 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 02:48:00.103 E ns/openshift-multus pod/multus-w547t node/ip-10-0-129-68.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:48:00.135 E ns/openshift-machine-config-operator pod/machine-config-daemon-l42c6 node/ip-10-0-129-68.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:48:02.361 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:24.430235       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:24.430264       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:26.445382       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:26.445488       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:28.505304       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:28.505340       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:30.507538       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:30.507570       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:32.539842       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:32.540583       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:34.624201       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:34.624233       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:36.597867       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:36.597908       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:38.613405       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:38.613460       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:40.626758       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:40.626787       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:45:42.636255       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:45:42.636282       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 02:48:02.361 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-scheduler container exited with code 2 (Error): tion::client-ca-file"]: "kube-csr-signer_@1586311232" [] issuer="kubelet-signer" (2020-04-08 02:00:31 +0000 UTC to 2020-04-09 01:46:26 +0000 UTC (now=2020-04-08 02:24:06.479927064 +0000 UTC))\nI0408 02:24:06.480693       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1586311234" (2020-04-08 02:00:44 +0000 UTC to 2022-04-08 02:00:45 +0000 UTC (now=2020-04-08 02:24:06.480669561 +0000 UTC))\nI0408 02:24:06.515801       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586312642" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586312641" (2020-04-08 01:24:00 +0000 UTC to 2021-04-08 01:24:00 +0000 UTC (now=2020-04-08 02:24:06.515769545 +0000 UTC))\nI0408 02:24:06.523501       1 node_tree.go:86] Added node "ip-10-0-129-68.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0408 02:24:06.523734       1 node_tree.go:86] Added node "ip-10-0-134-80.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0408 02:24:06.523975       1 node_tree.go:86] Added node "ip-10-0-135-223.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0408 02:24:06.524110       1 node_tree.go:86] Added node "ip-10-0-141-227.ec2.internal" in group "us-east-1:\x00:us-east-1b" to NodeTree\nI0408 02:24:06.524234       1 node_tree.go:86] Added node "ip-10-0-153-196.ec2.internal" in group "us-east-1:\x00:us-east-1c" to NodeTree\nI0408 02:24:06.524355       1 node_tree.go:86] Added node "ip-10-0-154-233.ec2.internal" in group "us-east-1:\x00:us-east-1c" to NodeTree\nI0408 02:24:06.571699       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Apr 08 02:48:02.428 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-apiserver container exited with code 1 (Error): 3.953769       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.953945       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.954103       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.954281       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0408 02:45:43.954466       1 watcher.go:197] watch chan error: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\nI0408 02:45:43.954599       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.954765       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.954976       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.955215       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.955397       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0408 02:45:43.955729       1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0408 02:45:43.955793       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.955967       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0408 02:45:43.956058       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0408 02:45:43.956209       1 watcher.go:197] watch chan error: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\n
Apr 08 02:48:02.428 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 02:24:01.987353       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 02:48:02.428 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:45:31.390859       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:31.392105       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:45:41.405742       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:41.406251       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 02:48:02.503 E ns/openshift-cluster-node-tuning-operator pod/tuned-8f8wq node/ip-10-0-135-223.ec2.internal container/tuned container exited with code 143 (Error): o:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 02:30:08.309943   95917 tuned.go:285] starting tuned...\n2020-04-08 02:30:08,447 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 02:30:08,458 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 02:30:08,459 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 02:30:08,460 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 02:30:08,461 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 02:30:08,514 INFO     tuned.daemon.controller: starting controller\n2020-04-08 02:30:08,514 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 02:30:08,525 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 02:30:08,526 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 02:30:08,530 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 02:30:08,532 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 02:30:08,534 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 02:30:08,692 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 02:30:08,714 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 02:45:13.291878   95917 tuned.go:486] profile "ip-10-0-135-223.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 02:45:13.533021   95917 tuned.go:486] profile "ip-10-0-135-223.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0408 02:45:14.159328   95917 tuned.go:392] getting recommended profile...\nI0408 02:45:14.973430   95917 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 08 02:48:02.544 E ns/openshift-sdn pod/sdn-controller-dw8db node/ip-10-0-135-223.ec2.internal container/sdn-controller container exited with code 2 (Error): d not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"e9c9d346-83f0-4389-9766-b33eadb88adb", ResourceVersion:"32773", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721907818, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-135-223\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-08T01:56:58Z\",\"renewTime\":\"2020-04-08T02:31:35Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0004f6cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0004f6ce0)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-135-223 became leader'\nI0408 02:31:35.763297       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0408 02:31:35.777912       1 master.go:51] Initializing SDN master\nI0408 02:31:35.887218       1 network_controller.go:61] Started OpenShift Network Controller\nI0408 02:42:14.438771       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:42:14.438842       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0408 02:42:14.439026       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Apr 08 02:48:02.583 E ns/openshift-sdn pod/ovs-vn9wx node/ip-10-0-135-223.ec2.internal container/openvswitch container exited with code 1 (Error): 560ed on port 10\n2020-04-08T02:45:26.248Z|00150|bridge|INFO|bridge br0: added interface vethe3ee5826 on port 86\n2020-04-08T02:45:26.294Z|00151|connmgr|INFO|br0<->unix#747: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T02:45:26.361Z|00152|connmgr|INFO|br0<->unix#751: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:26.363Z|00153|connmgr|INFO|br0<->unix#753: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T02:45:26.312Z|00011|jsonrpc|WARN|unix#647: receive error: Connection reset by peer\n2020-04-08T02:45:26.312Z|00012|reconnect|WARN|unix#647: connection dropped (Connection reset by peer)\n2020-04-08T02:45:29.651Z|00154|connmgr|INFO|br0<->unix#759: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:29.685Z|00155|connmgr|INFO|br0<->unix#762: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:29.714Z|00156|bridge|INFO|bridge br0: deleted interface veth2bf4a973 on port 82\n2020-04-08T02:45:30.182Z|00157|connmgr|INFO|br0<->unix#765: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:30.214Z|00158|connmgr|INFO|br0<->unix#768: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:30.239Z|00159|bridge|INFO|bridge br0: deleted interface vethe3ee5826 on port 86\n2020-04-08T02:45:32.953Z|00160|connmgr|INFO|br0<->unix#771: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:45:32.992Z|00161|connmgr|INFO|br0<->unix#774: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:45:33.015Z|00162|bridge|INFO|bridge br0: deleted interface vethb3d9e71f on port 79\n2020-04-08T02:45:42.277Z|00163|bridge|INFO|bridge br0: added interface vethf8dd4168 on port 87\n2020-04-08T02:45:42.331Z|00164|connmgr|INFO|br0<->unix#783: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T02:45:42.460Z|00165|connmgr|INFO|br0<->unix#788: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T02:45:42.461Z|00166|connmgr|INFO|br0<->unix#789: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08 02:45:43 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 02:48:02.613 E ns/openshift-controller-manager pod/controller-manager-zj5lr node/ip-10-0-135-223.ec2.internal container/controller-manager container exited with code 1 (Error): I0408 02:29:24.980437       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 02:29:24.982968       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 02:29:24.983156       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yzvgscmf/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 02:29:24.983101       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 02:29:24.984022       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 02:48:02.661 E ns/openshift-multus pod/multus-fs8lv node/ip-10-0-135-223.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:48:02.685 E ns/openshift-multus pod/multus-admission-controller-b4x4g node/ip-10-0-135-223.ec2.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 02:48:02.741 E ns/openshift-machine-config-operator pod/machine-config-daemon-cbdfc node/ip-10-0-135-223.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:48:02.782 E ns/openshift-cluster-version pod/cluster-version-operator-7f784ff457-h29q4 node/ip-10-0-135-223.ec2.internal container/cluster-version-operator container exited with code 255 (Error): 8 of 565)\nI0408 02:45:42.133873       1 sync_worker.go:634] Done syncing for service "openshift-etcd-operator/metrics" (58 of 565)\nI0408 02:45:42.133940       1 sync_worker.go:621] Running sync for configmap "openshift-etcd-operator/etcd-operator-config" (59 of 565)\nI0408 02:45:42.194271       1 sync_worker.go:634] Done syncing for configmap "openshift-etcd-operator/etcd-operator-config" (59 of 565)\nI0408 02:45:42.194417       1 sync_worker.go:621] Running sync for configmap "openshift-etcd-operator/etcd-ca-bundle" (60 of 565)\nI0408 02:45:42.245214       1 sync_worker.go:634] Done syncing for configmap "openshift-etcd-operator/etcd-ca-bundle" (60 of 565)\nI0408 02:45:42.246560       1 sync_worker.go:621] Running sync for secret "openshift-etcd-operator/etcd-client" (61 of 565)\nI0408 02:45:42.281507       1 sync_worker.go:634] Done syncing for secret "openshift-etcd-operator/etcd-client" (61 of 565)\nI0408 02:45:42.281720       1 sync_worker.go:621] Running sync for clusterrolebinding "system:openshift:operator:etcd-operator" (62 of 565)\nI0408 02:45:42.382171       1 sync_worker.go:634] Done syncing for clusterrolebinding "system:openshift:operator:etcd-operator" (62 of 565)\nI0408 02:45:42.382232       1 sync_worker.go:621] Running sync for serviceaccount "openshift-etcd-operator/etcd-operator" (63 of 565)\nI0408 02:45:42.437326       1 sync_worker.go:634] Done syncing for serviceaccount "openshift-etcd-operator/etcd-operator" (63 of 565)\nI0408 02:45:42.437407       1 sync_worker.go:621] Running sync for deployment "openshift-etcd-operator/etcd-operator" (64 of 565)\nI0408 02:45:42.585967       1 sync_worker.go:634] Done syncing for deployment "openshift-etcd-operator/etcd-operator" (64 of 565)\nI0408 02:45:42.586118       1 sync_worker.go:621] Running sync for clusteroperator "etcd" (65 of 565)\nI0408 02:45:43.714569       1 start.go:140] Shutting down due to terminated\nI0408 02:45:43.981524       1 task_graph.go:568] Canceled worker 6\nF0408 02:45:43.981669       1 start.go:148] Received shutdown signal twice, exiting\n
Apr 08 02:48:02.805 E ns/openshift-monitoring pod/node-exporter-xgf9j node/ip-10-0-135-223.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:29:06Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:48:02.860 E ns/openshift-machine-config-operator pod/machine-config-server-xz6zj node/ip-10-0-135-223.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 02:41:36.565330       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 02:41:36.566404       1 api.go:51] Launching server on :22624\nI0408 02:41:36.566477       1 api.go:51] Launching server on :22623\n
Apr 08 02:48:02.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/cluster-policy-controller container exited with code 1 (Error): I0408 02:25:48.600665       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0408 02:25:48.603834       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0408 02:25:48.606864       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0408 02:25:48.607546       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 08 02:48:02.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-controller-manager container exited with code 2 (Error):    1 replica_set.go:226] Found 5 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-8649dcdf8f: packageserver-647bbfb4dd, packageserver-6bd9f84b94, packageserver-84b4bd9b78, packageserver-78568694, packageserver-8649dcdf8f\nI0408 02:45:42.035523       1 controller_utils.go:604] Controller packageserver-8649dcdf8f deleting pod openshift-operator-lifecycle-manager/packageserver-8649dcdf8f-nfxg5\nI0408 02:45:42.037535       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"d74c95e5-c973-46ec-b227-188875283c07", APIVersion:"apps/v1", ResourceVersion:"41242", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-8649dcdf8f to 0\nI0408 02:45:42.055660       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-8649dcdf8f", UID:"eb70c13f-17d9-422a-8ec2-43dcf0ac2f0f", APIVersion:"apps/v1", ResourceVersion:"41419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-8649dcdf8f-nfxg5\nI0408 02:45:42.068505       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-647bbfb4dd, need 2, creating 1\nI0408 02:45:42.072223       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"d74c95e5-c973-46ec-b227-188875283c07", APIVersion:"apps/v1", ResourceVersion:"41421", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set packageserver-647bbfb4dd to 2\nI0408 02:45:42.172487       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-647bbfb4dd", UID:"fec9a1a4-7d81-48c7-8f7f-225514a906f5", APIVersion:"apps/v1", ResourceVersion:"41425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-647bbfb4dd-nrs8w\n
Apr 08 02:48:02.890 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 9040       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:10.969512       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:16.428320       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:16.428759       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:20.993839       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:20.994282       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:26.441561       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:26.441977       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:31.026865       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:31.027417       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:36.461003       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:36.461381       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:45:41.035103       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:45:41.035542       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 02:48:06.026 E ns/openshift-etcd pod/etcd-ip-10-0-135-223.ec2.internal node/ip-10-0-135-223.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-135-223.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-135-223.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T02:22:28.550Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:22:28.550Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-135-223.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-135-223.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T02:22:28.556Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"warn","ts":"2020-04-08T02:22:28.557Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.135.223:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.135.223:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-08T02:22:28.557Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:22:28.557Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T02:22:29.558Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.135.223:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.135.223:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 02:48:07.890 E ns/openshift-machine-config-operator pod/machine-config-daemon-l42c6 node/ip-10-0-129-68.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 02:48:13.826 E ns/openshift-machine-config-operator pod/machine-config-daemon-cbdfc node/ip-10-0-135-223.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 02:48:18.432 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-795449c768-r8lpv node/ip-10-0-141-227.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 08 02:48:18.466 E ns/openshift-monitoring pod/prometheus-adapter-7bf86d6ff7-6rdms node/ip-10-0-141-227.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0408 02:29:01.773606       1 adapter.go:93] successfully using in-cluster auth\nI0408 02:29:02.338625       1 secure_serving.go:116] Serving securely on [::]:6443\nW0408 02:45:44.132925       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:133: Unexpected watch close - watch lasted less than a second and no items received\n
Apr 08 02:48:18.490 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-227.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:29:45.482Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:29:45.489Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:29:45.489Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:29:45.490Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:29:45.490Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:29:45.491Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:29:45.491Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:29:45.492Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:29:45.492Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:48:19.452 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-227.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:29:09 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:48:19.452 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-227.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:29:10 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:29:10 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:29:10 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:29:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:29:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:29:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:29:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 02:29:10 http.go:107: HTTPS: listening on [::]:9095\nI0408 02:29:10.321327       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 02:48:19.487 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-227.ec2.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 02:41:57 Watching directory: "/etc/alertmanager/config"\n
Apr 08 02:48:19.487 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-227.ec2.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 02:41:57 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:41:57 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 02:41:57 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 02:41:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 02:41:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 02:41:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 02:41:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0408 02:41:57.376750       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 02:41:57 http.go:107: HTTPS: listening on [::]:9095\n
Apr 08 02:48:22.833 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 08 02:48:33.220 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-68.ec2.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T02:48:31.299Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T02:48:31.309Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T02:48:31.309Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T02:48:31.310Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T02:48:31.310Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T02:48:31.311Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T02:48:31.312Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T02:48:31.312Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 02:48:38.036 E ns/openshift-cluster-machine-approver pod/machine-approver-7c46d7765f-x798c node/ip-10-0-153-196.ec2.internal container/machine-approver-controller container exited with code 2 (Error): no such file or directory\nI0408 02:45:18.183104       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0408 02:45:18.183229       1 main.go:238] Starting Machine Approver\nI0408 02:45:18.183706       1 reflector.go:175] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:240\nI0408 02:45:18.283492       1 main.go:148] CSR csr-q8h6c added\nI0408 02:45:18.283531       1 main.go:151] CSR csr-q8h6c is already approved\nI0408 02:45:18.283553       1 main.go:148] CSR csr-w5rl6 added\nI0408 02:45:18.283564       1 main.go:151] CSR csr-w5rl6 is already approved\nI0408 02:45:18.283576       1 main.go:148] CSR csr-wn96z added\nI0408 02:45:18.283587       1 main.go:151] CSR csr-wn96z is already approved\nI0408 02:45:18.283599       1 main.go:148] CSR csr-24dnm added\nI0408 02:45:18.283609       1 main.go:151] CSR csr-24dnm is already approved\nI0408 02:45:18.283621       1 main.go:148] CSR csr-7b99d added\nI0408 02:45:18.283631       1 main.go:151] CSR csr-7b99d is already approved\nI0408 02:45:18.283643       1 main.go:148] CSR csr-jx8kz added\nI0408 02:45:18.283653       1 main.go:151] CSR csr-jx8kz is already approved\nI0408 02:45:18.283666       1 main.go:148] CSR csr-lt4xk added\nI0408 02:45:18.283691       1 main.go:151] CSR csr-lt4xk is already approved\nI0408 02:45:18.283704       1 main.go:148] CSR csr-lv5tl added\nI0408 02:45:18.283714       1 main.go:151] CSR csr-lv5tl is already approved\nI0408 02:45:18.283729       1 main.go:148] CSR csr-6dbhs added\nI0408 02:45:18.283784       1 main.go:151] CSR csr-6dbhs is already approved\nI0408 02:45:18.283798       1 main.go:148] CSR csr-9lp6t added\nI0408 02:45:18.283808       1 main.go:151] CSR csr-9lp6t is already approved\nI0408 02:45:18.283820       1 main.go:148] CSR csr-bbb4d added\nI0408 02:45:18.283830       1 main.go:151] CSR csr-bbb4d is already approved\nI0408 02:45:18.283842       1 main.go:148] CSR csr-rpjf6 added\nI0408 02:45:18.283852       1 main.go:151] CSR csr-rpjf6 is already approved\n
Apr 08 02:48:38.197 E ns/openshift-machine-api pod/machine-api-controllers-5684dccb99-74gmq node/ip-10-0-153-196.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 02:48:39.890 E ns/openshift-machine-api pod/machine-api-operator-6c465c6ccb-bf897 node/ip-10-0-153-196.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 08 02:48:41.682 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-778f9d59d4-xsrr9 node/ip-10-0-153-196.ec2.internal container/operator container exited with code 1 (Error):   1 httplog.go:90] verb="GET" URI="/metrics" latency=4.229324ms resp=200 UserAgent="Prometheus/2.15.2" srcIP="10.128.2.22:46404": \nI0408 02:47:56.298563       1 request.go:557] Throttling request took 139.153397ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0408 02:47:56.498579       1 request.go:557] Throttling request took 193.955443ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0408 02:48:09.656422       1 httplog.go:90] verb="GET" URI="/metrics" latency=16.36311ms resp=200 UserAgent="Prometheus/2.15.2" srcIP="10.131.0.24:46078": \nI0408 02:48:16.300008       1 request.go:557] Throttling request took 121.867518ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0408 02:48:16.500012       1 request.go:557] Throttling request took 194.649546ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0408 02:48:36.084835       1 reflector.go:336] k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125: forcing resync\nI0408 02:48:36.124857       1 reflector.go:336] k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125: forcing resync\nI0408 02:48:36.573630       1 request.go:557] Throttling request took 59.233215ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0408 02:48:36.856481       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 02:48:36.857359       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:48:36.857435       1 builder.go:219] server exited\nW0408 02:48:36.857496       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Apr 08 02:48:43.057 E ns/openshift-machine-config-operator pod/machine-config-operator-7f7b56cdd7-fmxck node/ip-10-0-153-196.ec2.internal container/machine-config-operator container exited with code 2 (Error): 133\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-04-08T02:43:45Z\",\"renewTime\":\"2020-04-08T02:43:45Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00035a9e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00035aa00)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7f7b56cdd7-fmxck_4035f3ce-6059-45d0-ab72-1b259fce7133 became leader'\nI0408 02:43:45.705685       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0408 02:43:46.345550       1 operator.go:264] Starting MachineConfigOperator\nE0408 02:45:44.193088       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps?allowWatchBookmarks=true&resourceVersion=41117&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 02:45:44.201505       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts?allowWatchBookmarks=true&resourceVersion=26199&timeout=5m51s&timeoutSeconds=351&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0408 02:45:44.206147       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=41463&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Apr 08 02:48:44.353 E ns/openshift-service-ca-operator pod/service-ca-operator-6d4fc7486d-njxjx node/ip-10-0-153-196.ec2.internal container/operator container exited with code 1 (Error): 
Apr 08 02:48:47.541 E ns/e2e-k8s-sig-apps-job-upgrade-9931 pod/foo-lfvkb node/ip-10-0-141-227.ec2.internal container/c container exited with code 137 (Error): 
Apr 08 02:48:47.586 E ns/e2e-k8s-sig-apps-job-upgrade-9931 pod/foo-2swss node/ip-10-0-141-227.ec2.internal container/c container exited with code 137 (Error): 
Apr 08 02:49:03.541 E ns/openshift-console pod/console-7dc785574c-6kmm5 node/ip-10-0-153-196.ec2.internal container/console container exited with code 2 (Error): 2020-04-08T02:29:48Z cmd/main: cookies are secure!\n2020-04-08T02:29:48Z cmd/main: Binding to [::]:8443...\n2020-04-08T02:29:48Z cmd/main: using TLS\n
Apr 08 02:49:18.239 E ns/openshift-machine-api pod/machine-api-controllers-5684dccb99-fn9m9 node/ip-10-0-134-80.ec2.internal container/nodelink-controller container exited with code 255 (Error): 
Apr 08 02:49:26.141 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Definitions ConfigMaps failed: retrieving ConfigMap object failed: Get https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/grafana-dashboard-node-rsrc-use: unexpected EOF
Apr 08 02:49:37.310 E ns/openshift-marketplace pod/certified-operators-7d8bfd8d4-vg79q node/ip-10-0-154-233.ec2.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 02:49:40.311 E ns/openshift-marketplace pod/community-operators-6bcf7594dc-mbnfs node/ip-10-0-154-233.ec2.internal container/community-operators container exited with code 2 (Error): 
Apr 08 02:51:13.563 E ns/openshift-monitoring pod/node-exporter-nj8jw node/ip-10-0-141-227.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:28:50Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:28:50Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:51:13.581 E ns/openshift-cluster-node-tuning-operator pod/tuned-ng4kg node/ip-10-0-141-227.ec2.internal container/tuned container exited with code 143 (Error): nnect: connection refused\nI0408 02:45:44.797638   47273 tuned.go:527] tuned "rendered" changed\nI0408 02:45:44.797670   47273 tuned.go:218] extracting tuned profiles\nI0408 02:45:45.208153   47273 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 02:45:45.208185   47273 tuned.go:356] reloading tuned...\nI0408 02:45:45.208192   47273 tuned.go:359] sending HUP to PID 47449\n2020-04-08 02:45:45,208 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-08 02:45:45,620 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-08 02:45:45,631 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 02:45:45,631 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 02:45:45,632 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 02:45:45,662 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 02:45:45,665 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 02:45:45,665 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 02:45:45,668 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 02:45:45,671 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 02:45:45,672 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 02:45:45,675 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 02:45:45,684 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0408 02:45:45.999559   47273 tuned.go:486] profile "ip-10-0-141-227.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 02:45:46.208194   47273 tuned.go:392] getting recommended profile...\nI0408 02:45:46.351008   47273 tuned.go:428] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\n
Apr 08 02:51:13.613 E ns/openshift-multus pod/multus-gqmfq node/ip-10-0-141-227.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:51:13.648 E ns/openshift-sdn pod/ovs-s5p6j node/ip-10-0-141-227.ec2.internal container/openvswitch container exited with code 1 (Error): ce veth199980bb on port 22\n2020-04-08T02:48:18.748Z|00107|connmgr|INFO|br0<->unix#768: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:18.791Z|00108|connmgr|INFO|br0<->unix#771: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:48:18.814Z|00109|bridge|INFO|bridge br0: deleted interface vetha706d35a on port 26\n2020-04-08T02:48:46.854Z|00110|connmgr|INFO|br0<->unix#793: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:46.882Z|00111|connmgr|INFO|br0<->unix#796: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:48:46.903Z|00112|bridge|INFO|bridge br0: deleted interface veth9841a51a on port 13\n2020-04-08T02:48:46.935Z|00113|connmgr|INFO|br0<->unix#799: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:46.976Z|00114|connmgr|INFO|br0<->unix#802: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:48:46.998Z|00115|bridge|INFO|bridge br0: deleted interface veth0861506d on port 14\n2020-04-08T02:49:02.530Z|00116|connmgr|INFO|br0<->unix#817: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:49:02.582Z|00015|jsonrpc|WARN|unix#723: receive error: Connection reset by peer\n2020-04-08T02:49:02.582Z|00016|reconnect|WARN|unix#723: connection dropped (Connection reset by peer)\n2020-04-08T02:49:02.586Z|00017|jsonrpc|WARN|unix#724: receive error: Connection reset by peer\n2020-04-08T02:49:02.586Z|00018|reconnect|WARN|unix#724: connection dropped (Connection reset by peer)\n2020-04-08T02:49:02.571Z|00117|connmgr|INFO|br0<->unix#820: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:49:02.593Z|00118|bridge|INFO|bridge br0: deleted interface veth1cae9c14 on port 16\n2020-04-08T02:49:04.363Z|00119|connmgr|INFO|br0<->unix#825: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:49:04.398Z|00120|connmgr|INFO|br0<->unix#828: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:49:04.422Z|00121|bridge|INFO|bridge br0: deleted interface veth40ef0da4 on port 21\n2020-04-08 02:49:28 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 02:51:13.684 E ns/openshift-machine-config-operator pod/machine-config-daemon-gxxrl node/ip-10-0-141-227.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:51:22.697 E ns/openshift-machine-config-operator pod/machine-config-daemon-gxxrl node/ip-10-0-141-227.ec2.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 02:51:34.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-controller-manager-recovery-controller container exited with code 1 (Error): ertRotationController \nI0408 02:42:20.779346       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0408 02:49:13.410456       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 02:49:13.420267       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.420421       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.420485       1 csrcontroller.go:83] Shutting down CSR controller\nI0408 02:49:13.420519       1 csrcontroller.go:85] CSR controller shut down\nI0408 02:49:13.420576       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.420643       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.420714       1 base_controller.go:101] Shutting down CertRotationController ...\nI0408 02:49:13.420781       1 base_controller.go:58] Shutting down worker of CertRotationController controller ...\nI0408 02:49:13.420819       1 base_controller.go:48] All CertRotationController workers have been terminated\nI0408 02:49:13.420953       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.421041       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.421135       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.421210       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0408 02:49:13.421310       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\n
Apr 08 02:51:34.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 6302       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:48:41.276695       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:48:46.998183       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:48:46.998661       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:48:51.295936       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:48:51.296319       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:48:57.007838       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:48:57.008185       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:49:01.305990       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:49:01.306650       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:49:07.030886       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:49:07.031320       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 02:49:11.324914       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:49:11.325256       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 02:51:34.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-controller-manager container exited with code 2 (Error): ted pod: oauth-openshift-6658c9d558-m7f87\nI0408 02:49:07.987645       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-authentication/oauth-openshift-79dd764bc, need 2, creating 1\nI0408 02:49:07.998054       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication", Name:"oauth-openshift", UID:"c66afcb4-0ebf-4bf4-9d7a-5b969f90f258", APIVersion:"apps/v1", ResourceVersion:"44395", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set oauth-openshift-79dd764bc to 2\nI0408 02:49:08.111024       1 deployment_controller.go:485] Error syncing deployment openshift-authentication/oauth-openshift: Operation cannot be fulfilled on deployments.apps "oauth-openshift": the object has been modified; please apply your changes to the latest version and try again\nI0408 02:49:08.288202       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-79dd764bc", UID:"2e3eb95c-0dd6-42f2-b6d4-65b4d7c3aad8", APIVersion:"apps/v1", ResourceVersion:"45156", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: oauth-openshift-79dd764bc-bzhpd\nI0408 02:49:08.519947       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0408 02:49:11.558157       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\nI0408 02:49:12.406438       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/prometheus-adapter: Operation cannot be fulfilled on deployments.apps "prometheus-adapter": the object has been modified; please apply your changes to the latest version and try again\n
Apr 08 02:51:34.349 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): resourceVersion=45215&timeout=9m33s&timeoutSeconds=573&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 02:49:51.763496       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=45256&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 02:49:51.764732       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=45259&timeout=5m32s&timeoutSeconds=332&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 02:49:51.768281       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=41466&timeout=9m47s&timeoutSeconds=587&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 02:49:51.769335       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=45253&timeout=5m11s&timeoutSeconds=311&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 02:49:51.794564       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=41466&timeout=6m29s&timeoutSeconds=389&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0408 02:49:52.030548       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0408 02:49:52.030615       1 policy_controller.go:94] leaderelection lost\nI0408 02:49:52.037858       1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller\n
Apr 08 02:51:34.401 E ns/openshift-monitoring pod/node-exporter-jss5t node/ip-10-0-153-196.ec2.internal container/node-exporter container exited with code 143 (Error): -08T02:29:12Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T02:29:12Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 02:51:34.418 E ns/openshift-cluster-node-tuning-operator pod/tuned-spsg8 node/ip-10-0-153-196.ec2.internal container/tuned container exited with code 143 (Error): 47.504952   92878 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 02:45:47.504964   92878 tuned.go:356] reloading tuned...\nI0408 02:45:47.504974   92878 tuned.go:359] sending HUP to PID 92927\n2020-04-08 02:45:47,505 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-08 02:45:47,939 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-08 02:45:47,950 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 02:45:47,951 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 02:45:47,952 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 02:45:48,120 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 02:45:48,129 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 02:45:48,134 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 02:45:48,142 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 02:45:48,144 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 02:45:48,163 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 02:45:48,188 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 02:45:48,207 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 02:48:43.503263   92878 tuned.go:486] profile "ip-10-0-153-196.ec2.internal" changed, tuned profile requested: openshift-node\nI0408 02:48:43.528165   92878 tuned.go:392] getting recommended profile...\nI0408 02:48:43.605819   92878 tuned.go:486] profile "ip-10-0-153-196.ec2.internal" changed, tuned profile requested: openshift-control-plane\nI0408 02:48:44.134101   92878 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 08 02:51:34.433 E ns/openshift-multus pod/multus-admission-controller-px8s8 node/ip-10-0-153-196.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 02:51:34.475 E ns/openshift-sdn pod/ovs-bfrhj node/ip-10-0-153-196.ec2.internal container/openvswitch container exited with code 143 (Error): 05bab on port 11\n2020-04-08T02:48:56.959Z|00187|bridge|INFO|bridge br0: added interface veth1f73bbc1 on port 87\n2020-04-08T02:48:57.021Z|00188|connmgr|INFO|br0<->unix#949: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T02:48:57.095Z|00189|connmgr|INFO|br0<->unix#953: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:57.102Z|00190|connmgr|INFO|br0<->unix#955: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T02:48:58.239Z|00191|connmgr|INFO|br0<->unix#958: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:58.283Z|00192|connmgr|INFO|br0<->unix#961: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:48:58.328Z|00193|bridge|INFO|bridge br0: deleted interface vethab7f6f2e on port 71\n2020-04-08T02:48:59.042Z|00194|bridge|INFO|bridge br0: added interface vethbaad012a on port 88\n2020-04-08T02:48:59.094Z|00195|connmgr|INFO|br0<->unix#964: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T02:48:59.194Z|00196|connmgr|INFO|br0<->unix#968: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:48:59.195Z|00197|connmgr|INFO|br0<->unix#970: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T02:49:00.626Z|00198|connmgr|INFO|br0<->unix#976: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:49:00.668Z|00199|connmgr|INFO|br0<->unix#979: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:49:00.712Z|00200|bridge|INFO|bridge br0: deleted interface veth1f73bbc1 on port 87\n2020-04-08T02:49:02.644Z|00201|connmgr|INFO|br0<->unix#982: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:49:02.693Z|00202|connmgr|INFO|br0<->unix#985: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:49:02.720Z|00203|bridge|INFO|bridge br0: deleted interface vethbaad012a on port 88\n2020-04-08T02:49:02.948Z|00204|connmgr|INFO|br0<->unix#988: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T02:49:02.980Z|00205|connmgr|INFO|br0<->unix#991: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T02:49:03.005Z|00206|bridge|INFO|bridge br0: deleted interface vethef0ee6f2 on port 75\n2020-04-08 02:49:13 info: Saving flows ...\n
Apr 08 02:51:34.497 E ns/openshift-machine-config-operator pod/machine-config-daemon-pr76f node/ip-10-0-153-196.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 02:51:34.520 E ns/openshift-machine-config-operator pod/machine-config-server-dfv6m node/ip-10-0-153-196.ec2.internal container/machine-config-server container exited with code 2 (Error): I0408 02:41:32.148337       1 start.go:38] Version: machine-config-daemon-4.5.0-202004071701-2-gdd5eeeb2-dirty (dd5eeeb2bf88c50c9b7c2aa2385c4b2078a9eea0)\nI0408 02:41:32.149606       1 api.go:51] Launching server on :22624\nI0408 02:41:32.149638       1 api.go:51] Launching server on :22623\n
Apr 08 02:51:34.548 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-scheduler container exited with code 2 (Error): , 2 nodes were found feasible.\nI0408 02:48:58.146027       1 scheduler.go:728] pod openshift-operator-lifecycle-manager/packageserver-bfbcff6b9-gstsn is bound successfully on node "ip-10-0-134-80.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:48:58.442450       1 scheduler.go:728] pod openshift-marketplace/redhat-marketplace-6bf8b96879-w4zbm is bound successfully on node "ip-10-0-129-68.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:48:59.276046       1 scheduler.go:728] pod openshift-marketplace/redhat-operators-bbcf974c8-wh7zj is bound successfully on node "ip-10-0-129-68.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:48:59.861629       1 scheduler.go:728] pod openshift-marketplace/redhat-operators-7df58ddfbd-nxlk6 is bound successfully on node "ip-10-0-129-68.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:49:02.407413       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6496b44c7-dqw4g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:49:04.391974       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-544c884d8d-wj2wj: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 02:49:06.270289       1 scheduler.go:728] pod openshift-operator-lifecycle-manager/packageserver-bfbcff6b9-zsgld is bound successfully on node "ip-10-0-135-223.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0408 02:49:08.335658       1 scheduler.go:728] pod openshift-authentication/oauth-openshift-79dd764bc-bzhpd is bound successfully on node "ip-10-0-135-223.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible.\n
Apr 08 02:51:34.548 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:48:52.789154       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:48:52.789190       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:48:54.802006       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:48:54.802124       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:48:56.814672       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:48:56.814713       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:48:58.841249       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:48:58.841823       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:00.856379       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:00.856403       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:02.884106       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:02.884279       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:04.893974       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:04.894005       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:06.919604       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:06.919644       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:08.991086       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:08.991122       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 02:49:11.442895       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 02:49:11.442930       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 02:51:34.579 E ns/openshift-multus pod/multus-m2twg node/ip-10-0-153-196.ec2.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 02:51:34.600 E ns/openshift-controller-manager pod/controller-manager-vct9f node/ip-10-0-153-196.ec2.internal container/controller-manager container exited with code 1 (Error): out=8m11s&timeoutSeconds=491&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0408 02:48:31.762116       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 85; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:48:31.762217       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 7; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:48:31.762299       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 39; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:48:31.762358       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 53; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:48:31.762415       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 9; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 02:48:31.762470       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 27; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 02:51:34.653 E ns/openshift-sdn pod/sdn-controller-8vz6j node/ip-10-0-153-196.ec2.internal container/sdn-controller container exited with code 2 (Error): I0408 02:32:12.915553       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 08 02:51:34.672 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-apiserver container exited with code 1 (Error): nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460440       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460474       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460540       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460608       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460701       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460749       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\nW0408 02:49:13.460769       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://localhost:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:2379: connect: connection refused". Reconnecting...\n
Apr 08 02:51:34.672 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 02:25:56.538732       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 02:51:34.672 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:48:54.927386       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:48:54.928154       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 02:49:04.961449       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 02:49:04.961885       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 02:51:37.946 E ns/openshift-etcd pod/etcd-ip-10-0-153-196.ec2.internal node/ip-10-0-153-196.ec2.internal container/etcd-metrics container exited with code 2 (Error): ll-serving-metrics/etcd-serving-metrics-ip-10-0-153-196.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-196.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T02:23:27.608Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:23:27.609Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-196.ec2.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-196.ec2.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-08T02:23:27.610Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.153.196:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.153.196:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-08T02:23:27.614Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T02:23:27.614Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T02:23:27.614Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T02:23:28.612Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.153.196:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.153.196:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 02:51:46.887 E ns/openshift-machine-config-operator pod/machine-config-daemon-pr76f node/ip-10-0-153-196.ec2.internal container/oauth-proxy container exited with code 1 (Error):