Result          SUCCESS
Tests           2 failed / 23 succeeded
Started         2020-04-05 16:42
Elapsed         1h23m
Work namespace  ci-op-c6lxz8nf
Refs            openshift-4.5:fe90dcbe, 43:ac96b104
Pod             614ee2cd-775c-11ea-a109-0a58ac105bdd
Repo            openshift/etcd
Revision        1

Test Failures


Cluster upgrade Kubernetes and OpenShift APIs remain available (41m2s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 10s of 41m1s (0%):

Apr 05 17:42:28.961 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Apr 05 17:42:28.961 E kube-apiserver Kube API started failing: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Apr 05 17:42:29.925 E kube-apiserver Kube API is not responding to GET requests
Apr 05 17:42:29.925 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 05 17:42:30.639 I kube-apiserver Kube API started responding to GET requests
Apr 05 17:42:30.668 I openshift-apiserver OpenShift API started responding to GET requests
Apr 05 17:45:37.135 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Apr 05 17:45:37.136 E kube-apiserver Kube API started failing: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Apr 05 17:45:37.925 E openshift-apiserver OpenShift API is not responding to GET requests
Apr 05 17:45:37.925 - 5s    E kube-apiserver Kube API is not responding to GET requests
Apr 05 17:45:38.204 I openshift-apiserver OpenShift API started responding to GET requests
Apr 05 17:45:44.410 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1586109321.xml



openshift-tests Monitor cluster while tests execute (41m36s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
182 error level events were detected during this test run:

Apr 05 17:19:00.149 E ns/openshift-machine-api pod/machine-api-operator-64dbc6dcbf-xwbfv node/ip-10-0-153-86.us-west-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 05 17:20:48.137 E ns/openshift-machine-api pod/machine-api-controllers-68648f95c8-2944q node/ip-10-0-128-200.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 05 17:20:55.080 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Apr 05 17:21:37.698 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-128-200.us-west-2.compute.internal,ip-10-0-153-86.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 05 17:23:12.682 E ns/openshift-cluster-machine-approver pod/machine-approver-96595dc6b-fq49g node/ip-10-0-142-68.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): :146] CSR csr-rx8ms added\nI0405 17:06:34.420991       1 csr_check.go:418] retrieving serving cert from ip-10-0-159-148.us-west-2.compute.internal (10.0.159.148:10250)\nW0405 17:06:34.422653       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0405 17:06:34.422674       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-159-148.us-west-2.compute.internal\nI0405 17:06:34.427562       1 main.go:196] CSR csr-rx8ms approved\nI0405 17:06:44.769637       1 main.go:146] CSR csr-hwl7z added\nI0405 17:06:44.790498       1 csr_check.go:418] retrieving serving cert from ip-10-0-130-4.us-west-2.compute.internal (10.0.130.4:10250)\nW0405 17:06:44.791405       1 csr_check.go:178] Failed to retrieve current serving cert: remote error: tls: internal error\nI0405 17:06:44.791422       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-130-4.us-west-2.compute.internal\nI0405 17:06:44.798219       1 main.go:196] CSR csr-hwl7z approved\nE0405 17:12:10.768175       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=16999&timeoutSeconds=431&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0405 17:18:58.911231       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=21847&timeoutSeconds=506&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0405 17:18:59.911766       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 05 17:23:26.975 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-6dd6c5dd85-vw7g9 node/ip-10-0-131-140.us-west-2.compute.internal container/operator container exited with code 255 (Error): 5 17:23:15.801390       1 operator.go:147] Finished syncing operator at 17.267153ms\nI0405 17:23:15.804692       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:15.804684021 +0000 UTC m=+927.157838192\nI0405 17:23:15.820914       1 operator.go:147] Finished syncing operator at 16.222217ms\nI0405 17:23:15.820962       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:15.820955693 +0000 UTC m=+927.174109957\nI0405 17:23:15.850240       1 operator.go:147] Finished syncing operator at 29.27702ms\nI0405 17:23:15.857089       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:15.857082638 +0000 UTC m=+927.210236630\nI0405 17:23:16.193524       1 operator.go:147] Finished syncing operator at 336.433618ms\nI0405 17:23:19.898209       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:19.898197601 +0000 UTC m=+931.251351574\nI0405 17:23:19.923198       1 operator.go:147] Finished syncing operator at 24.991719ms\nI0405 17:23:19.923262       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:19.923256814 +0000 UTC m=+931.276410748\nI0405 17:23:19.938812       1 operator.go:147] Finished syncing operator at 15.549664ms\nI0405 17:23:26.075594       1 operator.go:145] Starting syncing operator at 2020-04-05 17:23:26.075579406 +0000 UTC m=+937.428733749\nI0405 17:23:26.092273       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0405 17:23:26.093185       1 logging_controller.go:93] Shutting down LogLevelController\nI0405 17:23:26.093214       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0405 17:23:26.093229       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0405 17:23:26.093298       1 builder.go:243] stopped\nF0405 17:23:26.094282       1 builder.go:210] server exited\nI0405 17:23:26.095062       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0405 17:23:26.095079       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\n
Apr 05 17:23:30.987 E ns/openshift-monitoring pod/node-exporter-6m8d9 node/ip-10-0-131-140.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:06:55Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:06:55Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:23:40.020 E ns/openshift-monitoring pod/kube-state-metrics-5c5dd8496b-trscx node/ip-10-0-131-140.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 05 17:23:42.039 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-140.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/05 17:09:08 Watching directory: "/etc/alertmanager/config"\n
Apr 05 17:23:42.039 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-131-140.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/05 17:09:09 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:09:09 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:09:09 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:09:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/05 17:09:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:09:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:09:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:09:09 http.go:107: HTTPS: listening on [::]:9095\nI0405 17:09:09.188642       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 05 17:23:43.040 E ns/openshift-monitoring pod/openshift-state-metrics-57dd57976d-vwzs5 node/ip-10-0-131-140.us-west-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 05 17:23:47.072 E ns/openshift-service-ca-operator pod/service-ca-operator-f48494988-xgvfk node/ip-10-0-142-68.us-west-2.compute.internal container/operator container exited with code 1 (Error): 
Apr 05 17:23:52.188 E ns/openshift-monitoring pod/telemeter-client-68bc5bd547-szwbr node/ip-10-0-159-148.us-west-2.compute.internal container/reload container exited with code 2 (Error): 
Apr 05 17:23:52.188 E ns/openshift-monitoring pod/telemeter-client-68bc5bd547-szwbr node/ip-10-0-159-148.us-west-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 05 17:23:55.197 E ns/openshift-monitoring pod/prometheus-adapter-5b47d5d5fc-fhnk5 node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0405 17:09:19.032735       1 adapter.go:93] successfully using in-cluster auth\nI0405 17:09:20.127460       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 05 17:24:00.780 E ns/openshift-controller-manager pod/controller-manager-l6d6s node/ip-10-0-128-200.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0405 17:03:43.489880       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0405 17:03:43.491059       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-c6lxz8nf/stable-initial@sha256:0de8d8c1ed3a3bfbc6cbe0b54e060b47bdd23df5bb4e15d894f2eadc2790c9ee"\nI0405 17:03:43.491077       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-c6lxz8nf/stable-initial@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0405 17:03:43.491147       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0405 17:03:43.491446       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 05 17:24:32.254 E ns/openshift-console-operator pod/console-operator-5d766c7659-drlkt node/ip-10-0-153-86.us-west-2.compute.internal container/console-operator container exited with code 1 (Error):  ManagementStateController ...\nI0405 17:24:31.629716       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0405 17:24:31.629725       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0405 17:24:31.629735       1 controller.go:181] shutting down ConsoleRouteSyncController\nI0405 17:24:31.629738       1 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206\nI0405 17:24:31.629747       1 base_controller.go:101] Shutting down StatusSyncer_console ...\nI0405 17:24:31.629758       1 base_controller.go:101] Shutting down ResourceSyncController ...\nI0405 17:24:31.629758       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0405 17:24:31.629767       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0405 17:24:31.629772       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0405 17:24:31.629810       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0405 17:24:31.629826       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0405 17:24:31.629828       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0405 17:24:31.629871       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0405 17:24:31.629893       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0405 17:24:31.629907       1 base_controller.go:58] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0405 17:24:31.629913       1 base_controller.go:48] All UnsupportedConfigOverridesController workers have been terminated\nW0405 17:24:31.629916       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Apr 05 17:24:32.951 E ns/openshift-monitoring pod/node-exporter-xbfvb node/ip-10-0-128-200.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:03:36Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:03:36Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:24:37.275 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-05T17:24:07.362Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-05T17:24:07.367Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-05T17:24:07.381Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-05T17:24:07.382Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-05T17:24:07.383Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-05T17:24:07.383Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-05T17:24:07.383Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-05T17:24:07.383Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-05T17:24:07.384Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-05T17:24:07.384Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-05
Apr 05 17:24:38.101 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-130-4.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/05 17:09:30 Watching directory: "/etc/alertmanager/config"\n
Apr 05 17:24:38.101 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-130-4.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/05 17:09:30 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:09:30 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:09:30 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:09:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/05 17:09:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:09:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:09:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:09:30 http.go:107: HTTPS: listening on [::]:9095\nI0405 17:09:30.936690       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 05 17:24:39.100 E ns/openshift-monitoring pod/thanos-querier-67cff44c79-q8k79 node/ip-10-0-130-4.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/05 17:10:03 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/05 17:10:03 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:10:03 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:10:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/05 17:10:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:10:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/05 17:10:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:10:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0405 17:10:03.385828       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/05 17:10:03 http.go:107: HTTPS: listening on [::]:9091\n
Apr 05 17:24:52.978 E ns/openshift-service-ca pod/service-ca-b8c65b8d6-7z7hd node/ip-10-0-128-200.us-west-2.compute.internal container/service-ca-controller container exited with code 1 (Error): 
Apr 05 17:24:55.455 E ns/openshift-monitoring pod/node-exporter-ptwtj node/ip-10-0-142-68.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:04:23Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:04:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:24:58.156 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5dc8545f6f-rqrfp node/ip-10-0-130-4.us-west-2.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 05 17:25:06.499 E ns/openshift-marketplace pod/certified-operators-67cfcc5f8b-lpjfd node/ip-10-0-131-140.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 05 17:25:14.567 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-140.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-05T17:25:08.250Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-05T17:25:08.255Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-05T17:25:08.256Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-05T17:25:08.257Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-05T17:25:08.257Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-05T17:25:08.257Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-05T17:25:08.258Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-05T17:25:08.258Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-05
Apr 05 17:25:40.606 E ns/openshift-console pod/console-7794557d75-pbms6 node/ip-10-0-142-68.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-05T17:09:07Z cmd/main: cookies are secure!\n2020-04-05T17:09:07Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:09:17Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:09:27Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:09:37Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:09:47Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:09:57Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:10:07Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:10:17Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-05T17:10:27Z cmd/main: Binding to [::]:8443...\n2020-04-05T17:10:27Z cmd/main: using TLS\n
Apr 05 17:26:42.368 E ns/openshift-sdn pod/sdn-xd8j8 node/ip-10-0-130-4.us-west-2.compute.internal container/sdn container exited with code 255 (Error): rade-2488/dp-57bb6bd67b-54lgg got IP 10.129.2.17, ofport 18\nI0405 17:14:01.633265    2190 pod.go:503] CNI_ADD e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-3755/pod-configmap-b87d97a8-072e-41f1-9c09-267225b961ff got IP 10.129.2.18, ofport 19\nI0405 17:14:04.354443    2190 pod.go:539] CNI_DEL e2e-k8s-sig-apps-deployment-upgrade-2488/dp-7659d966b5-vd592\nI0405 17:14:05.623475    2190 pod.go:539] CNI_DEL e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-3755/pod-configmap-b87d97a8-072e-41f1-9c09-267225b961ff\nI0405 17:23:13.058883    2190 pod.go:503] CNI_ADD openshift-kube-storage-version-migrator/migrator-6d558994df-j7b2f got IP 10.129.2.19, ofport 20\nI0405 17:23:20.290695    2190 pod.go:503] CNI_ADD openshift-console/downloads-589bc49b8d-wxpvs got IP 10.129.2.20, ofport 21\nI0405 17:23:38.880864    2190 pod.go:503] CNI_ADD openshift-image-registry/image-registry-697f44c76d-z6pwh got IP 10.129.2.21, ofport 22\nI0405 17:24:01.337854    2190 pod.go:539] CNI_DEL openshift-image-registry/node-ca-wt6b4\nI0405 17:24:01.521372    2190 pod.go:539] CNI_DEL openshift-monitoring/prometheus-adapter-5b47d5d5fc-crx2r\nI0405 17:24:05.775917    2190 pod.go:503] CNI_ADD openshift-ingress/router-default-56f5986847-vswxb got IP 10.129.2.22, ofport 23\nI0405 17:24:13.450588    2190 pod.go:503] CNI_ADD openshift-image-registry/node-ca-g5w24 got IP 10.129.2.23, ofport 24\nI0405 17:24:37.855909    2190 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-0\nI0405 17:24:38.413280    2190 pod.go:539] CNI_DEL openshift-monitoring/thanos-querier-67cff44c79-q8k79\nI0405 17:24:41.348848    2190 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-0\nI0405 17:24:42.684646    2190 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-0 got IP 10.129.2.24, ofport 25\nI0405 17:24:57.733733    2190 pod.go:539] CNI_DEL openshift-cluster-storage-operator/csi-snapshot-controller-5dc8545f6f-rqrfp\nF0405 17:26:41.460706    2190 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 05 17:26:46.806 E ns/openshift-sdn pod/sdn-controller-h2gbf node/ip-10-0-142-68.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0405 16:54:54.545185       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0405 17:01:57.156794       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 05 17:26:52.741 E ns/openshift-sdn pod/sdn-controller-gm499 node/ip-10-0-153-86.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): treamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:08:28.520468       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:08:28.520481       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:08:28.520494       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:12:10.604716       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:12:10.605277       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:12:10.606536       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:12:10.606817       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:13:48.430892       1 vnids.go:115] Allocated netid 13211142 for namespace "e2e-frontend-ingress-available-4970"\nI0405 17:13:48.443496       1 vnids.go:115] Allocated netid 15318454 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-3755"\nI0405 17:13:48.452665       1 vnids.go:115] Allocated netid 1836538 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-2636"\nI0405 17:13:48.460552       1 vnids.go:115] Allocated netid 3344501 for namespace "e2e-k8s-sig-apps-job-upgrade-7727"\nI0405 17:13:48.469031       1 vnids.go:115] Allocated netid 4642731 for namespace "e2e-k8s-service-lb-available-118"\nI0405 17:13:48.480678       1 vnids.go:115] Allocated netid 9735139 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-7187"\nI0405 17:13:48.518005       1 vnids.go:115] Allocated netid 1126048 for namespace "e2e-k8s-sig-apps-deployment-upgrade-2488"\nI0405 17:13:48.544089       1 vnids.go:115] Allocated netid 1129795 for namespace "e2e-control-plane-available-7861"\nI0405 17:13:48.563826       1 vnids.go:115] Allocated netid 3472236 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-1975"\n
Apr 05 17:26:59.702 E ns/openshift-multus pod/multus-nlvbm node/ip-10-0-159-148.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:27:12.887 E ns/openshift-sdn pod/sdn-9smvr node/ip-10-0-142-68.us-west-2.compute.internal container/sdn container exited with code 255 (Error): -apiserver-operator-8c4f8cf86-d8cqz\nI0405 17:23:29.252996    1982 pod.go:539] CNI_DEL openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-5d96xsd2s\nI0405 17:23:31.556342    1982 pod.go:503] CNI_ADD openshift-marketplace/marketplace-operator-844d6dd967-26hwj got IP 10.129.0.72, ofport 73\nI0405 17:23:33.358899    1982 pod.go:503] CNI_ADD openshift-authentication/oauth-openshift-7cf9d44b64-skzpr got IP 10.129.0.73, ofport 74\nI0405 17:23:35.250816    1982 pod.go:539] CNI_DEL openshift-authentication-operator/authentication-operator-c8444894f-wnpwm\nI0405 17:23:42.444537    1982 pod.go:503] CNI_ADD openshift-authentication/oauth-openshift-7c79f69469-zjk7r got IP 10.129.0.74, ofport 75\nI0405 17:23:42.563734    1982 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/olm-operator-7944777b44-f8h5v\nI0405 17:23:46.355413    1982 pod.go:539] CNI_DEL openshift-service-ca-operator/service-ca-operator-f48494988-xgvfk\nI0405 17:23:53.460534    1982 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/catalog-operator-66b549dc57-ldzds\nI0405 17:23:59.985376    1982 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-gsnqh\nI0405 17:24:03.802415    1982 pod.go:503] CNI_ADD openshift-operator-lifecycle-manager/packageserver-cdc5cb8b4-zx5vq got IP 10.129.0.75, ofport 76\nI0405 17:24:06.975500    1982 pod.go:539] CNI_DEL openshift-authentication/oauth-openshift-7cf9d44b64-skzpr\nI0405 17:24:12.567931    1982 pod.go:503] CNI_ADD openshift-controller-manager/controller-manager-p7jrb got IP 10.129.0.76, ofport 77\nI0405 17:24:40.218015    1982 pod.go:539] CNI_DEL openshift-image-registry/node-ca-kbk4h\nI0405 17:24:52.034239    1982 pod.go:503] CNI_ADD openshift-image-registry/node-ca-69q5s got IP 10.129.0.77, ofport 78\nI0405 17:25:39.719911    1982 pod.go:539] CNI_DEL openshift-console/console-7794557d75-pbms6\nF0405 17:27:11.976833    1982 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 05 17:27:33.901 E ns/openshift-sdn pod/sdn-585ww node/ip-10-0-159-148.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0405 17:26:42.111443   66015 node.go:146] Initializing SDN node "ip-10-0-159-148.us-west-2.compute.internal" (10.0.159.148) of type "redhat/openshift-ovs-networkpolicy"\nI0405 17:26:42.116303   66015 cmd.go:151] Starting node networking (unknown)\nI0405 17:26:42.232061   66015 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0405 17:26:42.335395   66015 proxy.go:103] Using unidling+iptables Proxier.\nI0405 17:26:42.335770   66015 proxy.go:129] Tearing down userspace rules.\nI0405 17:26:42.347240   66015 networkpolicy.go:330] SyncVNIDRules: 2 unused VNIDs\nI0405 17:26:42.548693   66015 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0405 17:26:42.555067   66015 config.go:313] Starting service config controller\nI0405 17:26:42.555097   66015 shared_informer.go:197] Waiting for caches to sync for service config\nI0405 17:26:42.555118   66015 config.go:131] Starting endpoints config controller\nI0405 17:26:42.555134   66015 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0405 17:26:42.555170   66015 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0405 17:26:42.655857   66015 shared_informer.go:204] Caches are synced for endpoints config \nI0405 17:26:42.655981   66015 shared_informer.go:204] Caches are synced for service config \nF0405 17:27:32.805324   66015 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 05 17:27:55.098 E ns/openshift-sdn pod/sdn-lt58d node/ip-10-0-153-86.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0405 17:27:19.111246   95114 node.go:146] Initializing SDN node "ip-10-0-153-86.us-west-2.compute.internal" (10.0.153.86) of type "redhat/openshift-ovs-networkpolicy"\nI0405 17:27:19.115392   95114 cmd.go:151] Starting node networking (unknown)\nI0405 17:27:19.223770   95114 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0405 17:27:19.321194   95114 proxy.go:103] Using unidling+iptables Proxier.\nI0405 17:27:19.321537   95114 proxy.go:129] Tearing down userspace rules.\nI0405 17:27:19.332389   95114 networkpolicy.go:330] SyncVNIDRules: 12 unused VNIDs\nI0405 17:27:19.535133   95114 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0405 17:27:19.542752   95114 config.go:313] Starting service config controller\nI0405 17:27:19.542784   95114 shared_informer.go:197] Waiting for caches to sync for service config\nI0405 17:27:19.542938   95114 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0405 17:27:19.543662   95114 config.go:131] Starting endpoints config controller\nI0405 17:27:19.543683   95114 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0405 17:27:19.642908   95114 shared_informer.go:204] Caches are synced for service config \nI0405 17:27:19.643822   95114 shared_informer.go:204] Caches are synced for endpoints config \nF0405 17:27:54.035288   95114 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 05 17:27:58.005 E ns/openshift-multus pod/multus-65hjq node/ip-10-0-131-140.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:28:38.145 E ns/openshift-sdn pod/sdn-j2l9n node/ip-10-0-131-140.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0405 17:27:51.544604   78156 node.go:146] Initializing SDN node "ip-10-0-131-140.us-west-2.compute.internal" (10.0.131.140) of type "redhat/openshift-ovs-networkpolicy"\nI0405 17:27:51.549554   78156 cmd.go:151] Starting node networking (unknown)\nI0405 17:27:51.721285   78156 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0405 17:27:51.839393   78156 proxy.go:103] Using unidling+iptables Proxier.\nI0405 17:27:51.839802   78156 proxy.go:129] Tearing down userspace rules.\nI0405 17:27:51.856790   78156 networkpolicy.go:330] SyncVNIDRules: 2 unused VNIDs\nI0405 17:27:52.062748   78156 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0405 17:27:52.068959   78156 config.go:313] Starting service config controller\nI0405 17:27:52.068981   78156 config.go:131] Starting endpoints config controller\nI0405 17:27:52.068992   78156 shared_informer.go:197] Waiting for caches to sync for service config\nI0405 17:27:52.069006   78156 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0405 17:27:52.069171   78156 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0405 17:27:52.169192   78156 shared_informer.go:204] Caches are synced for service config \nI0405 17:27:52.169195   78156 shared_informer.go:204] Caches are synced for endpoints config \nF0405 17:28:37.734318   78156 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 05 17:28:38.299 E ns/openshift-multus pod/multus-admission-controller-95g72 node/ip-10-0-153-86.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 05 17:29:00.791 E ns/openshift-multus pod/multus-87br2 node/ip-10-0-130-4.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:30:13.512 E ns/openshift-multus pod/multus-cvz4f node/ip-10-0-142-68.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:31:07.170 E ns/openshift-multus pod/multus-66pxp node/ip-10-0-128-200.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:31:54.836 E ns/openshift-multus pod/multus-rgc96 node/ip-10-0-153-86.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 05 17:34:33.801 E ns/openshift-machine-config-operator pod/machine-config-daemon-vr7s4 node/ip-10-0-159-148.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:34:59.773 E ns/openshift-machine-config-operator pod/machine-config-daemon-jdf6b node/ip-10-0-128-200.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:35:16.431 E ns/openshift-machine-config-operator pod/machine-config-daemon-65km2 node/ip-10-0-153-86.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:35:26.415 E ns/openshift-machine-config-operator pod/machine-config-daemon-98294 node/ip-10-0-130-4.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:35:36.990 E ns/openshift-machine-config-operator pod/machine-config-daemon-qgfw4 node/ip-10-0-131-140.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:35:57.940 E ns/openshift-machine-config-operator pod/machine-config-controller-6f94d9b494-5hzkw node/ip-10-0-128-200.us-west-2.compute.internal container/machine-config-controller container exited with code 2 (Error): ving resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0405 17:07:55.729272       1 node_controller.go:452] Pool worker: node ip-10-0-159-148.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:07:55.729351       1 node_controller.go:452] Pool worker: node ip-10-0-159-148.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:07:55.729379       1 node_controller.go:452] Pool worker: node ip-10-0-159-148.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0405 17:08:05.759790       1 node_controller.go:452] Pool worker: node ip-10-0-130-4.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:08:05.759814       1 node_controller.go:452] Pool worker: node ip-10-0-130-4.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:08:05.759820       1 node_controller.go:452] Pool worker: node ip-10-0-130-4.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0405 17:08:12.095508       1 node_controller.go:452] Pool worker: node ip-10-0-131-140.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:08:12.095535       1 node_controller.go:452] Pool worker: node ip-10-0-131-140.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-93843d2bb37431550a996c3bbff220e2\nI0405 17:08:12.095540       1 node_controller.go:452] Pool worker: node ip-10-0-131-140.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\n
Apr 05 17:38:02.979 E ns/openshift-machine-config-operator pod/machine-config-server-8l8q4 node/ip-10-0-153-86.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0405 16:58:59.931677       1 start.go:38] Version: machine-config-daemon-4.5.0-202004051201-2-ga195251a-dirty (a195251a12c0e8f9d9994c2662d3eb31c0a50eb1)\nI0405 16:58:59.932730       1 api.go:51] Launching server on :22624\nI0405 16:58:59.933373       1 api.go:51] Launching server on :22623\n
Apr 05 17:38:09.605 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-764b75c457-f7fj9 node/ip-10-0-159-148.us-west-2.compute.internal container/operator container exited with code 255 (Error):  m=+789.537714666\nI0405 17:36:34.729451       1 operator.go:147] Finished syncing operator at 18.537181ms\nI0405 17:36:34.729502       1 operator.go:145] Starting syncing operator at 2020-04-05 17:36:34.72949496 +0000 UTC m=+789.556303067\nI0405 17:36:34.748694       1 operator.go:147] Finished syncing operator at 19.191851ms\nI0405 17:36:35.602259       1 operator.go:145] Starting syncing operator at 2020-04-05 17:36:35.602249286 +0000 UTC m=+790.429057275\nI0405 17:36:35.622445       1 operator.go:147] Finished syncing operator at 20.186884ms\nI0405 17:36:35.700315       1 operator.go:145] Starting syncing operator at 2020-04-05 17:36:35.700302056 +0000 UTC m=+790.527110324\nI0405 17:36:35.720193       1 operator.go:147] Finished syncing operator at 19.879796ms\nI0405 17:36:35.800457       1 operator.go:145] Starting syncing operator at 2020-04-05 17:36:35.800446759 +0000 UTC m=+790.627254772\nI0405 17:36:35.820338       1 operator.go:147] Finished syncing operator at 19.88448ms\nI0405 17:36:35.899591       1 operator.go:145] Starting syncing operator at 2020-04-05 17:36:35.899577325 +0000 UTC m=+790.726385607\nI0405 17:36:36.320467       1 operator.go:147] Finished syncing operator at 420.879124ms\nI0405 17:38:07.091822       1 operator.go:145] Starting syncing operator at 2020-04-05 17:38:07.091807821 +0000 UTC m=+881.918616374\nI0405 17:38:07.142206       1 operator.go:147] Finished syncing operator at 50.384701ms\nI0405 17:38:07.157180       1 operator.go:145] Starting syncing operator at 2020-04-05 17:38:07.157167226 +0000 UTC m=+881.983975562\nI0405 17:38:07.171861       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0405 17:38:07.172407       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0405 17:38:07.172425       1 logging_controller.go:93] Shutting down LogLevelController\nI0405 17:38:07.172440       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nF0405 17:38:07.172515       1 builder.go:243] stopped\n
Apr 05 17:38:09.645 E ns/openshift-monitoring pod/prometheus-adapter-7dd8bcb486-qsb5j node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0405 17:23:59.593159       1 adapter.go:93] successfully using in-cluster auth\nI0405 17:24:00.353122       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 05 17:38:09.667 E ns/openshift-marketplace pod/certified-operators-74666cdc8-jkdb9 node/ip-10-0-159-148.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 05 17:38:09.694 E ns/openshift-marketplace pod/redhat-marketplace-6bd9cdbc56-rck5w node/ip-10-0-159-148.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 05 17:38:09.719 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-159-148.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/05 17:24:32 Watching directory: "/etc/alertmanager/config"\n
Apr 05 17:38:09.719 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-159-148.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/05 17:24:33 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:24:33 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:24:33 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:24:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/05 17:24:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:24:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:24:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0405 17:24:33.473700       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/05 17:24:33 http.go:107: HTTPS: listening on [::]:9095\n
Apr 05 17:38:09.796 E ns/openshift-monitoring pod/openshift-state-metrics-55594bd775-5n5ql node/ip-10-0-159-148.us-west-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Apr 05 17:38:10.668 E ns/openshift-monitoring pod/kube-state-metrics-77d4765df-ml529 node/ip-10-0-159-148.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 05 17:38:10.759 E ns/openshift-marketplace pod/redhat-operators-6665ffcb4f-gtgdp node/ip-10-0-159-148.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Apr 05 17:38:10.809 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-159-148.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/05 17:24:28 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 05 17:38:10.809 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/05 17:24:33 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/05 17:24:33 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:24:33 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:24:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/05 17:24:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:24:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/05 17:24:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:24:33 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0405 17:24:33.743852       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/05 17:24:33 http.go:107: HTTPS: listening on [::]:9091\n2020/04/05 17:28:17 oauthproxy.go:774: basicauth: 10.131.0.23:45724 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/05 17:32:48 oauthproxy.go:774: basicauth: 10.131.0.23:50536 Authorization header does not start with 'Basic', skipping basic authentication\n2020/04/05 17:37:19 oauthproxy.go:774: basicauth: 10.131.0.23:55446 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 05 17:38:10.809 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-05T17:24:25.186724756Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-05T17:24:25.189212394Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-05T17:24:30.188450259Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-04-05T17:24:35.188434148Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-05T17:24:40.415771725Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-05T17:24:40.415871563Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 05 17:38:16.869 E ns/openshift-machine-config-operator pod/machine-config-server-dccwl node/ip-10-0-142-68.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0405 16:56:21.556990       1 start.go:38] Version: machine-config-daemon-4.5.0-202004051201-2-ga195251a-dirty (a195251a12c0e8f9d9994c2662d3eb31c0a50eb1)\nI0405 16:56:21.557907       1 api.go:51] Launching server on :22624\nI0405 16:56:21.557951       1 api.go:51] Launching server on :22623\nI0405 17:04:08.050672       1 api.go:97] Pool worker requested by 10.0.133.194:7902\n
Apr 05 17:38:26.468 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-bb7bd4889-xd68k node/ip-10-0-131-140.us-west-2.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Apr 05 17:38:32.848 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-130-4.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-05T17:38:27.985Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-05T17:38:27.988Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-05T17:38:27.989Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-05T17:38:27.990Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-05T17:38:27.990Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-05T17:38:27.990Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-05T17:38:27.991Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-05T17:38:27.991Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-05
Apr 05 17:40:13.398 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 05 17:41:01.405 E ns/openshift-monitoring pod/node-exporter-bdsh5 node/ip-10-0-159-148.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:24:31Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:31Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:41:01.420 E ns/openshift-cluster-node-tuning-operator pod/tuned-nhtq5 node/ip-10-0-159-148.us-west-2.compute.internal container/tuned container exited with code 143 (Error): go:169] disabling system tuned...\nI0405 17:25:06.165156   59165 tuned.go:513] tuned "rendered" added\nI0405 17:25:06.165222   59165 tuned.go:218] extracting tuned profiles\nI0405 17:25:06.171823   59165 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0405 17:25:07.145275   59165 tuned.go:392] getting recommended profile...\nI0405 17:25:07.274520   59165 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0405 17:25:07.274591   59165 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0405 17:25:07.274638   59165 tuned.go:285] starting tuned...\n2020-04-05 17:25:07,395 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-05 17:25:07,402 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-05 17:25:07,402 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:25:07,403 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-05 17:25:07,404 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-05 17:25:07,437 INFO     tuned.daemon.controller: starting controller\n2020-04-05 17:25:07,437 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:25:07,449 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:25:07,450 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:25:07,453 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:25:07,456 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-05 17:25:07,457 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-05 17:25:07,583 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:25:07,597 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 05 17:41:01.467 E ns/openshift-multus pod/multus-p948b node/ip-10-0-159-148.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:41:01.487 E ns/openshift-sdn pod/ovs-x2lfn node/ip-10-0-159-148.us-west-2.compute.internal container/openvswitch container exited with code 143 (Error):  the last 0 s (4 deletes)\n2020-04-05T17:38:09.731Z|00103|bridge|INFO|bridge br0: deleted interface veth1085ed21 on port 26\n2020-04-05T17:38:09.794Z|00104|connmgr|INFO|br0<->unix#572: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:09.851Z|00105|connmgr|INFO|br0<->unix#575: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:09.887Z|00106|bridge|INFO|bridge br0: deleted interface veth489f4903 on port 32\n2020-04-05T17:38:09.934Z|00107|connmgr|INFO|br0<->unix#578: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:09.993Z|00108|connmgr|INFO|br0<->unix#581: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:10.020Z|00109|bridge|INFO|bridge br0: deleted interface vethead02aee on port 28\n2020-04-05T17:38:09.235Z|00011|jsonrpc|WARN|unix#479: receive error: Connection reset by peer\n2020-04-05T17:38:09.235Z|00012|reconnect|WARN|unix#479: connection dropped (Connection reset by peer)\n2020-04-05T17:38:09.714Z|00013|jsonrpc|WARN|unix#493: receive error: Connection reset by peer\n2020-04-05T17:38:09.714Z|00014|reconnect|WARN|unix#493: connection dropped (Connection reset by peer)\n2020-04-05T17:38:34.940Z|00015|jsonrpc|WARN|unix#522: receive error: Connection reset by peer\n2020-04-05T17:38:34.940Z|00016|reconnect|WARN|unix#522: connection dropped (Connection reset by peer)\n2020-04-05T17:38:52.894Z|00110|connmgr|INFO|br0<->unix#614: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:52.926Z|00111|connmgr|INFO|br0<->unix#617: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:52.949Z|00112|bridge|INFO|bridge br0: deleted interface vetha3acb037 on port 18\n2020-04-05T17:38:52.938Z|00017|jsonrpc|WARN|unix#537: receive error: Connection reset by peer\n2020-04-05T17:38:52.938Z|00018|reconnect|WARN|unix#537: connection dropped (Connection reset by peer)\n2020-04-05T17:38:52.943Z|00019|jsonrpc|WARN|unix#538: receive error: Connection reset by peer\n2020-04-05T17:38:52.943Z|00020|reconnect|WARN|unix#538: connection dropped (Connection reset by peer)\n2020-04-05 17:39:18 info: Saving flows ...\nTerminated\n
Apr 05 17:41:01.503 E ns/openshift-machine-config-operator pod/machine-config-daemon-cxsm8 node/ip-10-0-159-148.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:41:09.977 E ns/openshift-machine-config-operator pod/machine-config-daemon-cxsm8 node/ip-10-0-159-148.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:41:26.670 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-140.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/05 17:25:13 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 05 17:41:26.670 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-140.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/05 17:25:13 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/05 17:25:13 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:25:13 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:25:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/05 17:25:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:25:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/05 17:25:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:25:13 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0405 17:25:13.841015       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/05 17:25:13 http.go:107: HTTPS: listening on [::]:9091\n2020/04/05 17:38:26 oauthproxy.go:774: basicauth: 10.129.0.85:58046 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 05 17:41:26.670 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-131-140.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-05T17:25:13.204522612Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-05T17:25:13.207058446Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-05T17:25:18.354347187Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-05T17:25:18.35444656Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 05 17:41:26.692 E ns/openshift-marketplace pod/certified-operators-74666cdc8-4d4kv node/ip-10-0-131-140.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 05 17:41:26.815 E ns/openshift-monitoring pod/telemeter-client-7f95b4977b-v49z7 node/ip-10-0-131-140.us-west-2.compute.internal container/reload container exited with code 2 (Error): 
Apr 05 17:41:26.815 E ns/openshift-monitoring pod/telemeter-client-7f95b4977b-v49z7 node/ip-10-0-131-140.us-west-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 05 17:41:26.837 E ns/openshift-monitoring pod/grafana-5f8bff96d4-7ml27 node/ip-10-0-131-140.us-west-2.compute.internal container/grafana container exited with code 1 (Error): 
Apr 05 17:41:26.837 E ns/openshift-monitoring pod/grafana-5f8bff96d4-7ml27 node/ip-10-0-131-140.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 05 17:41:28.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/cluster-policy-controller container exited with code 1 (Error): : Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=27633&timeout=8m54s&timeoutSeconds=534&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0405 17:23:02.428307       1 reflector.go:307] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nE0405 17:23:02.428406       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: the server is currently unable to handle the request (get builds.build.openshift.io)\nE0405 17:23:02.731251       1 reflector.go:307] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0405 17:23:02.731342       1 reflector.go:307] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0405 17:23:02.737537       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: the server is currently unable to handle the request (get buildconfigs.build.openshift.io)\nE0405 17:23:02.737647       1 reflector.go:307] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: the server is currently unable to handle the request (get clusterresourcequotas.quota.openshift.io)\nW0405 17:38:08.484536       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 645; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 05 17:41:28.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-controller-manager-recovery-controller container exited with code 1 (Error): rsion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager because it changed\nI0405 17:38:36.619269       1 csrcontroller.go:161] Refreshed CSRSigner.\nI0405 17:38:36.619350       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-kube-controller-manager", Name:"ip-10-0-128-200.us-west-2.compute.internal", UID:"8302ec81-dcf3-451e-b560-b957be1498e2", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager because it changed\nI0405 17:38:43.147119       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0405 17:38:43.147553       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149257       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149369       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149432       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149483       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149552       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149602       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149649       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:38:43.149683       1 csrcontroller.go:83] Shutting down CSR controller\nI0405 17:38:43.149711       1 csrcontroller.go:85] CSR controller shut down\n
Apr 05 17:41:28.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 0779       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:07.921073       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:15.223049       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:15.223301       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:17.926457       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:17.927565       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:25.230882       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:25.231474       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:27.936293       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:27.936550       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:35.239996       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:35.240254       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:38:37.943062       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:37.943374       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 05 17:41:28.716 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): er", UID:"7f24cfcf-5d94-4b5f-b9eb-4dc1c0f873ea", APIVersion:"apps/v1", ResourceVersion:"38300", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set csi-snapshot-controller-bb7bd4889 to 0\nI0405 17:38:25.796020       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-cluster-storage-operator", Name:"csi-snapshot-controller-bb7bd4889", UID:"6a34c651-772b-4b03-bff3-0ce883cffcce", APIVersion:"apps/v1", ResourceVersion:"38494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: csi-snapshot-controller-bb7bd4889-xd68k\nI0405 17:38:27.331262       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/openshift-state-metrics: Operation cannot be fulfilled on deployments.apps "openshift-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nI0405 17:38:28.327477       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/kube-state-metrics: Operation cannot be fulfilled on deployments.apps "kube-state-metrics": the object has been modified; please apply your changes to the latest version and try again\nI0405 17:38:28.922147       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/prometheus-operator: Operation cannot be fulfilled on deployments.apps "prometheus-operator": the object has been modified; please apply your changes to the latest version and try again\nI0405 17:38:39.338663       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/telemeter-client: Operation cannot be fulfilled on deployments.apps "telemeter-client": the object has been modified; please apply your changes to the latest version and try again\nI0405 17:38:42.538285       1 deployment_controller.go:485] Error syncing deployment openshift-monitoring/thanos-querier: Operation cannot be fulfilled on deployments.apps "thanos-querier": the object has been modified; please apply your changes to the latest version and try again\n
Apr 05 17:41:28.737 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:23.541169       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:23.541191       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:25.560141       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:25.560160       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:27.569399       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:27.569490       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:29.577886       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:29.577905       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:31.586639       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:31.586674       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:33.595109       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:33.595132       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:35.603716       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:35.603743       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:37.612875       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:37.612897       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:39.623114       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:39.623218       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:38:41.632837       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:38:41.632964       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 05 17:41:28.737 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): s=true&resourceVersion=25887&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0405 17:23:02.349258       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0405 17:23:02.349561       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0405 17:23:02.349619       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0405 17:23:02.349649       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0405 17:23:02.349682       1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: unknown (get pods)\nE0405 17:23:02.349712       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0405 17:23:02.349739       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0405 17:23:02.349768       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0405 17:23:02.389204       1 reflector.go:380] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0405 17:23:02.389336       1 leaderelection.go:320] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0405 17:23:02.389417       1 reflector.go:380] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Apr 05 17:41:28.753 E ns/openshift-controller-manager pod/controller-manager-2dnxq node/ip-10-0-128-200.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): 7:25:08.487830       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0405 17:25:08.618093       1 deleted_token_secrets.go:69] caches synced\nI0405 17:25:08.618097       1 deleted_dockercfg_secrets.go:74] caches synced\nI0405 17:25:08.618286       1 docker_registry_service.go:154] caches synced\nI0405 17:25:08.618312       1 create_dockercfg_secrets.go:218] urls found\nI0405 17:25:08.618317       1 create_dockercfg_secrets.go:224] caches synced\nI0405 17:25:08.618371       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.78.44:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.78.44:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0405 17:25:08.651351       1 build_controller.go:474] Starting build controller\nI0405 17:25:08.651369       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0405 17:38:08.480819       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 737; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:38:08.481095       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 559; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:38:08.481194       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 581; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 05 17:41:28.782 E ns/openshift-monitoring pod/node-exporter-gxcs2 node/ip-10-0-128-200.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:24:39Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:41:28.796 E ns/openshift-cluster-node-tuning-operator pod/tuned-rl8cj node/ip-10-0-128-200.us-west-2.compute.internal container/tuned container exited with code 143 (Error): penshift-control-plane)\nI0405 17:24:46.648166   83985 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0405 17:24:46.648202   83985 tuned.go:285] starting tuned...\n2020-04-05 17:24:46,746 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-05 17:24:46,753 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-05 17:24:46,753 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:24:46,754 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-05 17:24:46,754 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-05 17:24:46,786 INFO     tuned.daemon.controller: starting controller\n2020-04-05 17:24:46,786 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:24:46,796 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:24:46,797 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:24:46,800 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:24:46,801 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-05 17:24:46,802 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-05 17:24:46,874 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:24:46,882 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0405 17:38:43.144664   83985 tuned.go:114] received signal: terminated\nI0405 17:38:43.144693   83985 tuned.go:326] sending TERM to PID 84065\n2020-04-05 17:38:43,154 INFO     tuned.daemon.controller: terminating controller\n2020-04-05 17:38:43,154 INFO     tuned.daemon.daemon: stopping tuning\nI0405 17:38:43.242897   83985 tuned.go:486] profile "ip-10-0-128-200.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\n
Apr 05 17:41:28.817 E ns/openshift-sdn pod/sdn-controller-lnsrf node/ip-10-0-128-200.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0405 17:26:45.270039       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 05 17:41:28.843 E ns/openshift-sdn pod/ovs-rrlns node/ip-10-0-128-200.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error):  the last 0 s (2 deletes)\n2020-04-05T17:38:11.584Z|00101|connmgr|INFO|br0<->unix#550: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:11.646Z|00102|bridge|INFO|bridge br0: deleted interface vethb02c8215 on port 56\n2020-04-05T17:38:13.622Z|00103|connmgr|INFO|br0<->unix#558: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:13.649Z|00104|connmgr|INFO|br0<->unix#561: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:13.675Z|00105|bridge|INFO|bridge br0: deleted interface veth1ef7a577 on port 61\n2020-04-05T17:38:16.510Z|00106|connmgr|INFO|br0<->unix#566: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:16.560Z|00107|connmgr|INFO|br0<->unix#569: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:16.623Z|00108|bridge|INFO|bridge br0: deleted interface veth7844cd1c on port 66\n2020-04-05T17:38:16.675Z|00109|connmgr|INFO|br0<->unix#572: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:16.856Z|00110|connmgr|INFO|br0<->unix#575: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:16.911Z|00111|bridge|INFO|bridge br0: deleted interface veth674c6ada on port 21\n2020-04-05T17:38:16.971Z|00112|connmgr|INFO|br0<->unix#578: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:17.112Z|00113|connmgr|INFO|br0<->unix#581: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:17.183Z|00114|bridge|INFO|bridge br0: deleted interface vethf128370a on port 64\n2020-04-05T17:38:34.612Z|00115|connmgr|INFO|br0<->unix#594: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:38:34.643Z|00116|connmgr|INFO|br0<->unix#597: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:38:34.665Z|00117|bridge|INFO|bridge br0: deleted interface vethdcb5c465 on port 62\n2020-04-05T17:38:34.658Z|00013|jsonrpc|WARN|unix#498: receive error: Connection reset by peer\n2020-04-05T17:38:34.658Z|00014|reconnect|WARN|unix#498: connection dropped (Connection reset by peer)\n2020-04-05 17:38:43 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 05 17:41:28.858 E ns/openshift-multus pod/multus-jdpxl node/ip-10-0-128-200.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:41:28.870 E ns/openshift-multus pod/multus-admission-controller-rssqj node/ip-10-0-128-200.us-west-2.compute.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 05 17:41:28.890 E ns/openshift-machine-config-operator pod/machine-config-daemon-7zl82 node/ip-10-0-128-200.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:41:28.901 E ns/openshift-machine-config-operator pod/machine-config-server-9vw58 node/ip-10-0-128-200.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0405 17:38:02.262775       1 start.go:38] Version: machine-config-daemon-4.5.0-202004051201-2-ga195251a-dirty (a195251a12c0e8f9d9994c2662d3eb31c0a50eb1)\nI0405 17:38:02.263612       1 api.go:51] Launching server on :22624\nI0405 17:38:02.263694       1 api.go:51] Launching server on :22623\n
Apr 05 17:41:32.673 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): W0405 17:22:58.342659       1 cmd.go:200] Using insecure, self-signed certificates\nI0405 17:22:58.343100       1 crypto.go:588] Generating new CA for cert-regeneration-controller-signer@1586107378 cert, and key in /tmp/serving-cert-228466917/serving-signer.crt, /tmp/serving-cert-228466917/serving-signer.key\nI0405 17:22:59.125797       1 observer_polling.go:155] Starting file observer\nI0405 17:23:02.820018       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\n
Apr 05 17:41:32.673 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0405 17:22:58.780985       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 05 17:41:32.673 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:38:23.755080       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:23.755328       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:38:33.763120       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:38:33.763373       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 05 17:41:32.727 E ns/openshift-etcd pod/etcd-ip-10-0-128-200.us-west-2.compute.internal node/ip-10-0-128-200.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-128-200.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-05T17:21:51.875Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-05T17:21:51.876Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-128-200.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-128-200.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-05T17:21:51.878Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.128.200:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.128.200:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-05T17:21:51.879Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-05T17:21:51.879Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-05T17:21:51.879Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-05T17:21:52.878Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.128.200:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.128.200:9978: connect: connection refused\". Reconnecting..."}\n
Apr 05 17:41:37.359 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-200.us-west-2.compute.internal" not ready since 2020-04-05 17:41:28 +0000 UTC because KubeletNotReady ([PLEG is not healthy: pleg has yet to be successful, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network])\nEtcdMembersDegraded: ip-10-0-128-200.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 05 17:41:40.244 E ns/openshift-machine-config-operator pod/machine-config-daemon-7zl82 node/ip-10-0-128-200.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:41:44.616 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-200.us-west-2.compute.internal" not ready since 2020-04-05 17:41:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 05 17:41:44.625 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-200.us-west-2.compute.internal" not ready since 2020-04-05 17:41:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 05 17:41:44.628 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-128-200.us-west-2.compute.internal" not ready since 2020-04-05 17:41:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 05 17:41:48.443 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-159-148.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-05T17:41:46.709Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-05T17:41:46.710Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-05T17:41:46.712Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-05T17:41:46.714Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-05T17:41:46.714Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-05T17:41:46.714Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-05T17:41:46.715Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-05T17:41:46.715Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-05T17:41:46.715Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-05
Apr 05 17:42:04.127 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 05 17:42:05.290 E ns/openshift-machine-api pod/machine-api-controllers-746d59b9fd-tkfzm node/ip-10-0-142-68.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 05 17:42:06.729 E ns/openshift-cluster-machine-approver pod/machine-approver-5884f6fdb-xtlhm node/ip-10-0-142-68.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): .\nI0405 17:23:18.609378       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0405 17:23:18.609397       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0405 17:23:18.609439       1 main.go:236] Starting Machine Approver\nI0405 17:23:18.709622       1 main.go:146] CSR csr-72n9l added\nI0405 17:23:18.709716       1 main.go:149] CSR csr-72n9l is already approved\nI0405 17:23:18.709756       1 main.go:146] CSR csr-frzdk added\nI0405 17:23:18.709765       1 main.go:149] CSR csr-frzdk is already approved\nI0405 17:23:18.709772       1 main.go:146] CSR csr-nz2sw added\nI0405 17:23:18.709776       1 main.go:149] CSR csr-nz2sw is already approved\nI0405 17:23:18.709784       1 main.go:146] CSR csr-z2tsf added\nI0405 17:23:18.709789       1 main.go:149] CSR csr-z2tsf is already approved\nI0405 17:23:18.709799       1 main.go:146] CSR csr-6dcgp added\nI0405 17:23:18.709805       1 main.go:149] CSR csr-6dcgp is already approved\nI0405 17:23:18.709811       1 main.go:146] CSR csr-hwl7z added\nI0405 17:23:18.709815       1 main.go:149] CSR csr-hwl7z is already approved\nI0405 17:23:18.709820       1 main.go:146] CSR csr-kbt4l added\nI0405 17:23:18.709824       1 main.go:149] CSR csr-kbt4l is already approved\nI0405 17:23:18.709830       1 main.go:146] CSR csr-l6997 added\nI0405 17:23:18.709834       1 main.go:149] CSR csr-l6997 is already approved\nI0405 17:23:18.709860       1 main.go:146] CSR csr-pbhbb added\nI0405 17:23:18.709870       1 main.go:149] CSR csr-pbhbb is already approved\nI0405 17:23:18.709876       1 main.go:146] CSR csr-rx8ms added\nI0405 17:23:18.709879       1 main.go:149] CSR csr-rx8ms is already approved\nI0405 17:23:18.709885       1 main.go:146] CSR csr-t9rxr added\nI0405 17:23:18.709889       1 main.go:149] CSR csr-t9rxr is already approved\nI0405 17:23:18.709899       1 main.go:146] CSR csr-4tfk7 added\nI0405 17:23:18.709907       1 main.go:149] CSR csr-4tfk7 is already approved\n
Apr 05 17:42:22.790 E ns/openshift-console pod/console-768f89c5d6-htf4j node/ip-10-0-142-68.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-05T17:38:15Z cmd/main: cookies are secure!\n2020-04-05T17:38:15Z cmd/main: Binding to [::]:8443...\n2020-04-05T17:38:15Z cmd/main: using TLS\n2020-04-05T17:39:58Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Apr 05 17:43:26.718 E ns/openshift-marketplace pod/community-operators-848c8d75db-msgf7 node/ip-10-0-159-148.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 05 17:43:34.750 E ns/openshift-marketplace pod/redhat-marketplace-6bd9cdbc56-lhftx node/ip-10-0-159-148.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 05 17:44:17.260 E ns/openshift-monitoring pod/node-exporter-wl8rt node/ip-10-0-131-140.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:23:43Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:23:43Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:44:17.269 E ns/openshift-cluster-node-tuning-operator pod/tuned-xx6bd node/ip-10-0-131-140.us-west-2.compute.internal container/tuned container exited with code 143 (Error): \nI0405 17:42:28.927760   67534 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:28.927760   67534 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:29.634646   67534 tuned.go:527] tuned "rendered" changed\nI0405 17:42:29.634673   67534 tuned.go:218] extracting tuned profiles\nI0405 17:42:29.645740   67534 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0405 17:42:29.645767   67534 tuned.go:356] reloading tuned...\nI0405 17:42:29.645773   67534 tuned.go:359] sending HUP to PID 67754\n2020-04-05 17:42:29,645 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-05 17:42:30,230 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-05 17:42:30,242 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:42:30,242 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-05 17:42:30,243 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-05 17:42:30,276 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:42:30,278 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:42:30,279 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:42:30,282 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:42:30,284 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-05 17:42:30,284 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-05 17:42:30,289 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:42:30,296 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0405 17:42:34.907923   67534 tuned.go:114] received signal: terminated\nI0405 17:42:34.907971   67534 tuned.go:326] sending TERM to PID 67754\n
Apr 05 17:44:17.344 E ns/openshift-multus pod/multus-n8zch node/ip-10-0-131-140.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:44:17.345 E ns/openshift-sdn pod/ovs-5mqr6 node/ip-10-0-131-140.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): the last 0 s (4 deletes)\n2020-04-05T17:41:26.559Z|00126|bridge|INFO|bridge br0: deleted interface veth7618a529 on port 38\n2020-04-05T17:41:26.614Z|00127|connmgr|INFO|br0<->unix#720: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:41:26.666Z|00128|connmgr|INFO|br0<->unix#723: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:41:26.705Z|00129|bridge|INFO|bridge br0: deleted interface veth0dda301c on port 37\n2020-04-05T17:41:26.774Z|00130|connmgr|INFO|br0<->unix#726: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:41:26.910Z|00131|connmgr|INFO|br0<->unix#729: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:41:26.948Z|00132|bridge|INFO|bridge br0: deleted interface veth3a6435fd on port 40\n2020-04-05T17:41:27.022Z|00133|connmgr|INFO|br0<->unix#732: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:41:27.065Z|00134|connmgr|INFO|br0<->unix#735: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:41:27.110Z|00135|bridge|INFO|bridge br0: deleted interface veth623611db on port 27\n2020-04-05T17:41:53.987Z|00136|connmgr|INFO|br0<->unix#756: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:41:54.019Z|00137|connmgr|INFO|br0<->unix#759: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:41:54.051Z|00138|bridge|INFO|bridge br0: deleted interface vethf4189c8a on port 22\n2020-04-05T17:42:10.299Z|00139|connmgr|INFO|br0<->unix#772: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:10.335Z|00140|connmgr|INFO|br0<->unix#775: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:10.362Z|00141|bridge|INFO|bridge br0: deleted interface vethbe119f99 on port 23\n2020-04-05T17:42:11.163Z|00142|connmgr|INFO|br0<->unix#783: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:11.195Z|00143|connmgr|INFO|br0<->unix#786: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:11.220Z|00144|bridge|INFO|bridge br0: deleted interface veth6f9e93f8 on port 25\n2020-04-05 17:42:34 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 05 17:44:17.347 E ns/openshift-machine-config-operator pod/machine-config-daemon-27x49 node/ip-10-0-131-140.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:44:28.102 E ns/openshift-machine-config-operator pod/machine-config-daemon-27x49 node/ip-10-0-131-140.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:44:28.428 E ns/openshift-controller-manager pod/controller-manager-p7jrb node/ip-10-0-142-68.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): I0405 17:24:18.179785       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0405 17:24:18.181265       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-c6lxz8nf/stable@sha256:0de8d8c1ed3a3bfbc6cbe0b54e060b47bdd23df5bb4e15d894f2eadc2790c9ee"\nI0405 17:24:18.182613       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-c6lxz8nf/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0405 17:24:18.182574       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0405 17:24:18.183011       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 05 17:44:28.449 E ns/openshift-cluster-node-tuning-operator pod/tuned-pmnbz node/ip-10-0-142-68.us-west-2.compute.internal container/tuned container exited with code 143 (Error): ned.go:285] starting tuned...\n2020-04-05 17:24:53,522 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-05 17:24:53,529 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-05 17:24:53,530 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:24:53,530 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-05 17:24:53,531 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-05 17:24:53,568 INFO     tuned.daemon.controller: starting controller\n2020-04-05 17:24:53,569 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:24:53,580 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:24:53,580 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:24:53,584 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:24:53,587 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-05 17:24:53,589 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-05 17:24:53,672 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:24:53,687 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0405 17:38:43.089174   88947 tuned.go:486] profile "ip-10-0-142-68.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0405 17:38:43.254176   88947 tuned.go:392] getting recommended profile...\nI0405 17:38:43.486313   88947 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n2020-04-05 17:42:28,659 INFO     tuned.daemon.controller: terminating controller\nI0405 17:42:28.774493   88947 tuned.go:114] received signal: terminated\nI0405 17:42:28.774655   88947 tuned.go:326] sending TERM to PID 89045\n
Apr 05 17:44:28.481 E ns/openshift-monitoring pod/node-exporter-qt8x6 node/ip-10-0-142-68.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:25:05Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:25:05Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:44:28.493 E ns/openshift-sdn pod/sdn-controller-95prl node/ip-10-0-142-68.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0405 17:26:51.742000       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 05 17:44:28.507 E ns/openshift-sdn pod/ovs-vpc8j node/ip-10-0-142-68.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): r0<->unix#913: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:05.228Z|00176|connmgr|INFO|br0<->unix#916: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:05.282Z|00177|bridge|INFO|bridge br0: deleted interface veth91b0853b on port 68\n2020-04-05T17:42:05.488Z|00178|connmgr|INFO|br0<->unix#920: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:05.564Z|00179|connmgr|INFO|br0<->unix#923: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:05.613Z|00180|bridge|INFO|bridge br0: deleted interface veth5125b6e5 on port 76\n2020-04-05T17:42:05.716Z|00181|connmgr|INFO|br0<->unix#926: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:05.797Z|00182|connmgr|INFO|br0<->unix#929: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:05.887Z|00183|bridge|INFO|bridge br0: deleted interface veth93ba8187 on port 87\n2020-04-05T17:42:05.496Z|00007|jsonrpc|WARN|unix#746: send error: Broken pipe\n2020-04-05T17:42:05.496Z|00008|reconnect|WARN|unix#746: connection dropped (Broken pipe)\n2020-04-05T17:42:16.021Z|00184|connmgr|INFO|br0<->unix#940: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:16.060Z|00185|connmgr|INFO|br0<->unix#943: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:16.091Z|00186|bridge|INFO|bridge br0: deleted interface vethf24e26c4 on port 31\n2020-04-05T17:42:21.833Z|00187|connmgr|INFO|br0<->unix#949: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:21.865Z|00188|connmgr|INFO|br0<->unix#952: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:21.890Z|00189|bridge|INFO|bridge br0: deleted interface veth735866f5 on port 75\n2020-04-05T17:42:21.931Z|00190|connmgr|INFO|br0<->unix#955: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:42:22.080Z|00191|connmgr|INFO|br0<->unix#958: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:42:22.117Z|00192|bridge|INFO|bridge br0: deleted interface veth0e6695a2 on port 84\n2020-04-05 17:42:28 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 05 17:44:28.533 E ns/openshift-multus pod/multus-admission-controller-db97s node/ip-10-0-142-68.us-west-2.compute.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 05 17:44:28.554 E ns/openshift-multus pod/multus-4f86q node/ip-10-0-142-68.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:44:28.579 E ns/openshift-machine-config-operator pod/machine-config-daemon-4m7zl node/ip-10-0-142-68.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:44:28.592 E ns/openshift-machine-config-operator pod/machine-config-server-bpsg2 node/ip-10-0-142-68.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0405 17:38:24.542851       1 start.go:38] Version: machine-config-daemon-4.5.0-202004051201-2-ga195251a-dirty (a195251a12c0e8f9d9994c2662d3eb31c0a50eb1)\nI0405 17:38:24.545493       1 api.go:51] Launching server on :22624\nI0405 17:38:24.545606       1 api.go:51] Launching server on :22623\n
Apr 05 17:44:28.654 E ns/openshift-etcd pod/etcd-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): 142-68.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-142-68.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-05T17:17:51.195Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-05T17:17:51.195Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-68.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-142-68.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-05T17:17:51.200Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-05T17:17:51.200Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-05T17:17:51.200Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-05T17:17:51.200Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.142.68:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.142.68:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-05T17:17:52.200Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.142.68:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.142.68:9978: connect: connection refused\". Reconnecting..."}\n
Apr 05 17:44:28.668 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-apiserver container exited with code 1 (Error): sc = "transport is closing"\nI0405 17:42:28.789130       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.789372       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.789509       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.789742       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.789994       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.790116       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.790233       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.796769       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.797017       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.797232       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.797905       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.798108       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:42:28.808541       1 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick\nW0405 17:42:28.808753       1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://10.0.142.68:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.142.68:2379: connect: connection refused". Reconnecting...\n
Apr 05 17:44:28.668 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0405 17:19:01.025640       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 05 17:44:28.668 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:42:14.042149       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:14.042488       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:42:24.049475       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:24.049720       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 05 17:44:28.689 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-controller-manager-recovery-controller container exited with code 1 (Error): rsion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager because it changed\nI0405 17:42:23.069000       1 csrcontroller.go:161] Refreshed CSRSigner.\nI0405 17:42:23.069109       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"openshift-kube-controller-manager", Name:"ip-10-0-128-200.us-west-2.compute.internal", UID:"8302ec81-dcf3-451e-b560-b957be1498e2", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/csr-signer -n openshift-kube-controller-manager because it changed\nI0405 17:42:28.628983       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0405 17:42:28.629409       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.629545       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633327       1 csrcontroller.go:83] Shutting down CSR controller\nI0405 17:42:28.633370       1 csrcontroller.go:85] CSR controller shut down\nI0405 17:42:28.633452       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633518       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633573       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633620       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633676       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\nI0405 17:42:28.633730       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0/tools/cache/reflector.go:125\n
Apr 05 17:44:28.689 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 6419       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:41:57.156714       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:02.677979       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:02.678419       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:07.170028       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:07.170315       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:12.688595       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:12.688966       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:17.187754       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:17.190105       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:22.695567       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:22.695873       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:42:27.198425       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:42:27.198753       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 05 17:44:28.689 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-05 16:45:16 +0000 UTC to 2030-04-03 16:45:16 +0000 UTC (now=2020-04-05 17:19:44.576104677 +0000 UTC))\nI0405 17:19:44.576139       1 tlsconfig.go:178] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-04-05 16:45:19 +0000 UTC to 2020-04-06 16:45:19 +0000 UTC (now=2020-04-05 17:19:44.576129096 +0000 UTC))\nI0405 17:19:44.576371       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1586105747" (2020-04-05 16:55:57 +0000 UTC to 2022-04-05 16:55:58 +0000 UTC (now=2020-04-05 17:19:44.576362359 +0000 UTC))\nI0405 17:19:44.576534       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586107184" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586107184" (2020-04-05 16:19:44 +0000 UTC to 2021-04-05 16:19:44 +0000 UTC (now=2020-04-05 17:19:44.576527058 +0000 UTC))\nI0405 17:19:44.576564       1 secure_serving.go:178] Serving securely on [::]:10257\nI0405 17:19:44.576599       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0405 17:19:44.576640       1 tlsconfig.go:240] Starting DynamicServingCertificateController\n
Apr 05 17:44:28.689 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): ealth checks at 0.0.0.0:10357\nI0405 17:19:48.530889       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0405 17:42:31.668335       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0405 17:42:51.868650       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0405 17:43:05.249345       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0405 17:43:15.435040       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0405 17:43:36.195488       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\nE0405 17:43:56.108837       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Apr 05 17:44:28.705 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:08.899009       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:08.899028       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:10.914051       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:10.914075       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:12.923388       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:12.923409       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:14.936525       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:14.936551       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:16.944266       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:16.944322       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:18.954559       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:18.954579       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:20.963996       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:20.964101       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:22.973500       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:22.973605       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:24.984478       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:24.984503       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:42:26.995044       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:42:26.995065       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 05 17:44:28.705 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-142-68.us-west-2.compute.internal node/ip-10-0-142-68.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): n-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-05 16:45:16 +0000 UTC to 2030-04-03 16:45:16 +0000 UTC (now=2020-04-05 17:20:03.625215406 +0000 UTC))\nI0405 17:20:03.625248       1 tlsconfig.go:178] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1586105752" [] issuer="kubelet-signer" (2020-04-05 16:55:51 +0000 UTC to 2020-04-06 16:45:21 +0000 UTC (now=2020-04-05 17:20:03.625240042 +0000 UTC))\nI0405 17:20:03.625267       1 tlsconfig.go:178] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-05 16:45:19 +0000 UTC to 2020-04-06 16:45:19 +0000 UTC (now=2020-04-05 17:20:03.625261885 +0000 UTC))\nI0405 17:20:03.625463       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1586105747" (2020-04-05 16:56:03 +0000 UTC to 2022-04-05 16:56:04 +0000 UTC (now=2020-04-05 17:20:03.625456132 +0000 UTC))\nI0405 17:20:03.625650       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586107203" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586107202" (2020-04-05 16:20:02 +0000 UTC to 2021-04-05 16:20:02 +0000 UTC (now=2020-04-05 17:20:03.625639441 +0000 UTC))\nI0405 17:20:03.626451       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-scheduler/kube-scheduler...\n
Apr 05 17:44:38.906 E ns/openshift-machine-config-operator pod/machine-config-daemon-4m7zl node/ip-10-0-142-68.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:44:45.132 E ns/openshift-marketplace pod/certified-operators-5b7fd6ffbd-zwgj4 node/ip-10-0-130-4.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 05 17:44:46.163 E ns/openshift-marketplace pod/redhat-operators-5f97dd4f7d-cjt92 node/ip-10-0-130-4.us-west-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
Apr 05 17:44:46.230 E ns/openshift-marketplace pod/community-operators-78bb7686f7-ld9zd node/ip-10-0-130-4.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 05 17:44:47.366 E ns/openshift-monitoring pod/kube-state-metrics-77d4765df-d4gbl node/ip-10-0-130-4.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 05 17:44:47.455 E ns/openshift-monitoring pod/prometheus-adapter-7dd8bcb486-bbjz8 node/ip-10-0-130-4.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0405 17:38:13.975331       1 adapter.go:93] successfully using in-cluster auth\nI0405 17:38:14.792328       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 05 17:44:47.478 E ns/openshift-kube-storage-version-migrator pod/migrator-6d558994df-j7b2f node/ip-10-0-130-4.us-west-2.compute.internal container/migrator container exited with code 2 (Error): 
Apr 05 17:44:47.511 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-130-4.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/05 17:38:24 Watching directory: "/etc/alertmanager/config"\n
Apr 05 17:44:47.511 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-130-4.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/05 17:38:24 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:38:24 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:38:24 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:38:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/05 17:38:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:38:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/05 17:38:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:38:24 http.go:107: HTTPS: listening on [::]:9095\nI0405 17:38:24.331161       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 05 17:44:47.579 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-130-4.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/05 17:24:51 Watching directory: "/etc/alertmanager/config"\n
Apr 05 17:44:47.579 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-130-4.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_alertmanager-main-0_01e2b592-5d32-474d-a64b-3624615a5b2e/alertmanager-proxy/0.log": lstat /var/log/pods/openshift-monitoring_alertmanager-main-0_01e2b592-5d32-474d-a64b-3624615a5b2e/alertmanager-proxy/0.log: no such file or directory
Apr 05 17:44:47.846 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Apr 05 17:44:48.424 E ns/openshift-monitoring pod/thanos-querier-6ffbf7cf6f-t5gnc node/ip-10-0-130-4.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/05 17:38:13 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/05 17:38:13 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/05 17:38:13 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/05 17:38:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/05 17:38:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/05 17:38:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/05 17:38:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/05 17:38:13 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/05 17:38:13 http.go:107: HTTPS: listening on [::]:9091\nI0405 17:38:13.914785       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 05 17:44:55.918 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-8554cffc6-bp2nf node/ip-10-0-153-86.us-west-2.compute.internal container/operator container exited with code 255 (Error): 8ms) 200 [Prometheus/2.15.2 10.129.2.33:59222]\nI0405 17:44:08.675924       1 request.go:565] Throttling request took 163.239087ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0405 17:44:08.875928       1 request.go:565] Throttling request took 197.092415ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0405 17:44:23.671948       1 httplog.go:90] GET /metrics: (5.583978ms) 200 [Prometheus/2.15.2 10.128.2.21:43204]\nI0405 17:44:28.676042       1 request.go:565] Throttling request took 142.35529ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0405 17:44:28.876035       1 request.go:565] Throttling request took 195.87213ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0405 17:44:36.906540       1 httplog.go:90] GET /metrics: (24.940206ms) 200 [Prometheus/2.15.2 10.129.2.33:59222]\nI0405 17:44:48.674489       1 request.go:565] Throttling request took 162.22868ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0405 17:44:48.874479       1 request.go:565] Throttling request took 197.383476ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0405 17:44:53.145207       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0405 17:44:53.145591       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0405 17:44:53.145710       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0405 17:44:53.145791       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nF0405 17:44:53.145739       1 builder.go:210] server exited\n
Apr 05 17:45:05.445 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7d786dcd6b-6vgnv node/ip-10-0-153-86.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): -version-migrator-operator", UID:"98b83454-9bef-48f6-b099-f3cefd49aa79", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from False to True ("Progressing: deployment/migrator.openshift-kube-storage-version-migrator:: observed generation is 1, desired generation is 2.")\nI0405 17:23:12.481861       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"98b83454-9bef-48f6-b099-f3cefd49aa79", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("")\nI0405 17:44:44.516639       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"98b83454-9bef-48f6-b099-f3cefd49aa79", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0405 17:44:53.487951       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"98b83454-9bef-48f6-b099-f3cefd49aa79", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0405 17:44:59.127729       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0405 17:44:59.127860       1 leaderelection.go:66] leaderelection lost\n
Apr 05 17:45:12.426 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-131-140.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-05T17:45:10.583Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-05T17:45:10.588Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-05T17:45:10.597Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-05T17:45:10.600Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-05T17:45:10.601Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-05T17:45:10.601Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-05T17:45:10.602Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-05T17:45:10.602Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=info ts=2020-04-05T17:45:10.602Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=error ts=2020-04-05
Apr 05 17:45:37.313 E kube-apiserver failed contacting the API: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=46196&timeout=6m36s&timeoutSeconds=396&watch=true: dial tcp 44.233.182.107:6443: connect: connection refused
Apr 05 17:45:43.230 E kube-apiserver Kube API started failing: Get https://api.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 05 17:47:41.867 E ns/openshift-cluster-node-tuning-operator pod/tuned-j5fst node/ip-10-0-130-4.us-west-2.compute.internal container/tuned container exited with code 143 (Error): t-node\nI0405 17:45:39.172543   51802 tuned.go:392] getting recommended profile...\nI0405 17:45:39.283857   51802 tuned.go:428] active and recommended profile (openshift-node) match; profile change will not trigger profile reload\nI0405 17:45:44.395026   51802 tuned.go:527] tuned "rendered" changed\nI0405 17:45:44.395054   51802 tuned.go:218] extracting tuned profiles\nI0405 17:45:45.172465   51802 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0405 17:45:45.172494   51802 tuned.go:356] reloading tuned...\nI0405 17:45:45.172501   51802 tuned.go:359] sending HUP to PID 51910\n2020-04-05 17:45:45,172 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-05 17:45:46,106 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-05 17:45:46,114 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:45:46,114 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-05 17:45:46,115 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-05 17:45:46,146 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:45:46,148 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:45:46,148 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:45:46,151 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:45:46,153 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-05 17:45:46,154 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-05 17:45:46,158 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:45:46,167 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-05 17:45:55,131 INFO     tuned.daemon.controller: terminating controller\n2020-04-05 17:45:55,132 INFO     tuned.daemon.daemon: stopping tuning\n
Apr 05 17:47:41.885 E ns/openshift-monitoring pod/node-exporter-h9vwj node/ip-10-0-130-4.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:24:18Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:18Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:47:41.924 E ns/openshift-sdn pod/ovs-9xb7w node/ip-10-0-130-4.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): e last 0 s (4 deletes)\n2020-04-05T17:44:47.216Z|00157|bridge|INFO|bridge br0: deleted interface vethcb275198 on port 34\n2020-04-05T17:44:47.262Z|00158|connmgr|INFO|br0<->unix#1010: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:44:47.309Z|00159|connmgr|INFO|br0<->unix#1013: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:44:47.333Z|00021|jsonrpc|WARN|unix#883: receive error: Connection reset by peer\n2020-04-05T17:44:47.333Z|00022|reconnect|WARN|unix#883: connection dropped (Connection reset by peer)\n2020-04-05T17:44:47.342Z|00160|bridge|INFO|bridge br0: deleted interface veth1a1dad53 on port 31\n2020-04-05T17:45:14.291Z|00161|connmgr|INFO|br0<->unix#1037: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:14.321Z|00162|connmgr|INFO|br0<->unix#1040: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:14.344Z|00163|bridge|INFO|bridge br0: deleted interface veth749aa2d2 on port 16\n2020-04-05T17:45:29.321Z|00023|jsonrpc|WARN|unix#920: receive error: Connection reset by peer\n2020-04-05T17:45:29.321Z|00024|reconnect|WARN|unix#920: connection dropped (Connection reset by peer)\n2020-04-05T17:45:29.326Z|00025|jsonrpc|WARN|unix#921: receive error: Connection reset by peer\n2020-04-05T17:45:29.326Z|00026|reconnect|WARN|unix#921: connection dropped (Connection reset by peer)\n2020-04-05T17:45:29.275Z|00164|connmgr|INFO|br0<->unix#1052: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:29.310Z|00165|connmgr|INFO|br0<->unix#1055: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:29.332Z|00166|bridge|INFO|bridge br0: deleted interface veth6d82229a on port 29\n2020-04-05T17:45:30.923Z|00167|connmgr|INFO|br0<->unix#1062: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:30.951Z|00168|connmgr|INFO|br0<->unix#1065: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:30.972Z|00169|bridge|INFO|bridge br0: deleted interface veth56bbcb9c on port 23\n2020-04-05 17:45:55 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 05 17:47:41.967 E ns/openshift-multus pod/multus-d72h4 node/ip-10-0-130-4.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:47:41.983 E ns/openshift-machine-config-operator pod/machine-config-daemon-6dxk5 node/ip-10-0-130-4.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:47:49.745 E clusteroperator/console changed Degraded to True: RouteHealth_FailedGet: RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com/health): Get https://console-openshift-console.apps.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Apr 05 17:47:51.255 E ns/openshift-machine-config-operator pod/machine-config-daemon-6dxk5 node/ip-10-0-130-4.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:48:11.886 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/cluster-policy-controller container exited with code 1 (Error): tch stream: stream error: stream ID 285; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:41:55.996770       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 339; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.445197       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 435; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.445272       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 437; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.445322       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 331; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.445368       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 347; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.445415       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 433; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 05 17:48:11.886 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 3643       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:02.243922       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:07.433747       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:07.453910       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:12.256680       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:12.257056       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:17.445347       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:17.445781       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:22.261862       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:22.262174       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:27.453443       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:27.453777       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0405 17:45:32.271798       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:32.272171       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 05 17:48:11.886 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): hift-7c79f69469-b6xfv\nI0405 17:45:26.066120       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication", Name:"oauth-openshift", UID:"279d404f-e85d-470a-81ed-23be856d47cd", APIVersion:"apps/v1", ResourceVersion:"46072", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set oauth-openshift-5bdff77678 to 2\nI0405 17:45:26.145505       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-5bdff77678", UID:"a377cf82-1bc4-4291-873d-c39b4fbfb640", APIVersion:"apps/v1", ResourceVersion:"46077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: oauth-openshift-5bdff77678-srcm6\nI0405 17:45:31.439430       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication", Name:"oauth-openshift", UID:"279d404f-e85d-470a-81ed-23be856d47cd", APIVersion:"apps/v1", ResourceVersion:"46095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set oauth-openshift-7c79f69469 to 0\nI0405 17:45:31.439800       1 replica_set.go:598] Too many replicas for ReplicaSet openshift-authentication/oauth-openshift-7c79f69469, need 0, deleting 1\nI0405 17:45:31.439858       1 replica_set.go:226] Found 5 related ReplicaSets for ReplicaSet openshift-authentication/oauth-openshift-7c79f69469: oauth-openshift-7c79f69469, oauth-openshift-7cf9d44b64, oauth-openshift-77c65b6885, oauth-openshift-784df8cc9c, oauth-openshift-5bdff77678\nI0405 17:45:31.439957       1 controller_utils.go:604] Controller oauth-openshift-7c79f69469 deleting pod openshift-authentication/oauth-openshift-7c79f69469-bzpgd\nI0405 17:45:31.463220       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-authentication", Name:"oauth-openshift-7c79f69469", UID:"511638e0-8bab-4f28-8453-576a164de53f", APIVersion:"apps/v1", ResourceVersion:"46180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: oauth-openshift-7c79f69469-bzpgd\n
Apr 05 17:48:11.903 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:16.809772       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:16.809801       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:18.818811       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:18.818837       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:20.828788       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:20.828829       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:22.836404       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:22.836536       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:24.902957       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:24.902981       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:26.922918       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:26.922938       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:28.931420       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:28.931498       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:30.946761       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:30.946871       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:32.954295       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:32.954318       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0405 17:45:34.964123       1 certsync_controller.go:65] Syncing configmaps: []\nI0405 17:45:34.964230       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 05 17:48:11.903 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): ble: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0405 17:45:15.645183       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7bf5b4bcf9-t8h45: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0405 17:45:17.915733       1 scheduler.go:728] pod openshift-operator-lifecycle-manager/packageserver-5c8cffb887-l2z24 is bound successfully on node "ip-10-0-142-68.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0405 17:45:25.633051       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-d7f6c9d98-z9fcq: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0405 17:45:26.188081       1 scheduler.go:728] pod openshift-authentication/oauth-openshift-5bdff77678-srcm6 is bound successfully on node "ip-10-0-128-200.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0405 17:45:26.629152       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-7bf5b4bcf9-t8h45: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0405 17:45:36.632712       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-d7f6c9d98-z9fcq: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 05 17:48:11.943 E ns/openshift-cluster-node-tuning-operator pod/tuned-ddsr7 node/ip-10-0-153-86.us-west-2.compute.internal container/tuned container exited with code 143 (Error): ed.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0405 17:42:28.919944   84627 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:28.958526   84627 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:29.324006   84627 tuned.go:527] tuned "rendered" changed\nI0405 17:42:29.324026   84627 tuned.go:218] extracting tuned profiles\nI0405 17:42:30.250516   84627 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0405 17:42:30.250569   84627 tuned.go:356] reloading tuned...\nI0405 17:42:30.250575   84627 tuned.go:359] sending HUP to PID 84819\n2020-04-05 17:42:30,250 INFO     tuned.daemon.daemon: stopping tuning\n2020-04-05 17:42:31,149 INFO     tuned.daemon.daemon: terminating Tuned, rolling back all changes\n2020-04-05 17:42:31,306 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-05 17:42:31,307 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-05 17:42:31,308 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-05 17:42:31,585 INFO     tuned.daemon.daemon: starting tuning\n2020-04-05 17:42:31,588 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-05 17:42:31,592 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-05 17:42:31,599 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-05 17:42:31,601 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-05 17:42:31,603 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-05 17:42:31,609 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-05 17:42:31,619 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Apr 05 17:48:11.971 E ns/openshift-controller-manager pod/controller-manager-srcx9 node/ip-10-0-153-86.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): onnect: connection refused\nE0405 17:42:28.950044       1 reflector.go:320] github.com/openshift/openshift-controller-manager/pkg/unidling/controller/unidling_controller.go:199: Failed to watch *v1.Event: Get https://172.30.0.1:443/api/v1/events?allowWatchBookmarks=true&fieldSelector=reason%3DNeedPods&resourceVersion=39158&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0405 17:44:58.423006       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 31; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.423088       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 27; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.423161       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.423261       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 23; INTERNAL_ERROR") has prevented the request from succeeding\nW0405 17:44:58.423336       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 39; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 05 17:48:11.987 E ns/openshift-monitoring pod/node-exporter-vvjhq node/ip-10-0-153-86.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -05T17:24:50Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-05T17:24:50Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 05 17:48:12.003 E ns/openshift-sdn pod/sdn-controller-ksbz6 node/ip-10-0-153-86.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): bels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-153-86\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-05T16:54:30Z\",\"renewTime\":\"2020-04-05T17:27:11Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00047d320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00047d340)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-153-86 became leader'\nI0405 17:27:11.868137       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0405 17:27:11.880385       1 master.go:51] Initializing SDN master\nI0405 17:27:11.966290       1 network_controller.go:61] Started OpenShift Network Controller\nI0405 17:42:28.930713       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:28.931167       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:28.931343       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0405 17:42:28.942716       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0405 17:42:28.960541       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/hostsubnets?allowWatchBookmarks=true&resourceVersion=38693&timeout=7m19s&timeoutSeconds=439&watch=true: dial tcp 10.0.133.194:6443: connect: connection refused\n
Apr 05 17:48:12.040 E ns/openshift-sdn pod/ovs-w7f7k node/ip-10-0-153-86.us-west-2.compute.internal container/openvswitch container exited with code 143 (Error):  port 94\n2020-04-05T17:45:12.690Z|00186|connmgr|INFO|br0<->unix#1010: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:12.824Z|00187|connmgr|INFO|br0<->unix#1013: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:12.871Z|00188|bridge|INFO|bridge br0: deleted interface veth53c0700e on port 93\n2020-04-05T17:45:15.687Z|00189|connmgr|INFO|br0<->unix#1017: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:15.731Z|00190|connmgr|INFO|br0<->unix#1020: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:15.772Z|00191|bridge|INFO|bridge br0: deleted interface vethc988a634 on port 95\n2020-04-05T17:45:18.432Z|00192|connmgr|INFO|br0<->unix#1025: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:18.459Z|00193|connmgr|INFO|br0<->unix#1028: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:18.483Z|00194|bridge|INFO|bridge br0: deleted interface veth84d6650c on port 85\n2020-04-05T17:45:24.669Z|00013|jsonrpc|WARN|unix#905: receive error: Connection reset by peer\n2020-04-05T17:45:24.669Z|00014|reconnect|WARN|unix#905: connection dropped (Connection reset by peer)\n2020-04-05T17:45:24.601Z|00195|connmgr|INFO|br0<->unix#1035: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:24.644Z|00196|connmgr|INFO|br0<->unix#1038: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:24.680Z|00197|bridge|INFO|bridge br0: deleted interface veth0a8445b0 on port 82\n2020-04-05T17:45:30.546Z|00198|bridge|INFO|bridge br0: added interface veth5b6b8e2c on port 96\n2020-04-05T17:45:30.602Z|00199|connmgr|INFO|br0<->unix#1047: 5 flow_mods in the last 0 s (5 adds)\n2020-04-05T17:45:30.688Z|00200|connmgr|INFO|br0<->unix#1050: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:33.809Z|00201|connmgr|INFO|br0<->unix#1055: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-05T17:45:33.845Z|00202|connmgr|INFO|br0<->unix#1058: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-05T17:45:33.904Z|00203|bridge|INFO|bridge br0: deleted interface veth5b6b8e2c on port 96\n2020-04-05 17:45:36 info: Saving flows ...\nTerminated\n
Apr 05 17:48:12.057 E ns/openshift-multus pod/multus-admission-controller-mpkb9 node/ip-10-0-153-86.us-west-2.compute.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 05 17:48:12.077 E ns/openshift-multus pod/multus-blbpr node/ip-10-0-153-86.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 05 17:48:12.102 E ns/openshift-machine-config-operator pod/machine-config-daemon-k8prm node/ip-10-0-153-86.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 05 17:48:12.114 E ns/openshift-machine-config-operator pod/machine-config-server-gzrfl node/ip-10-0-153-86.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0405 17:38:15.640408       1 start.go:38] Version: machine-config-daemon-4.5.0-202004051201-2-ga195251a-dirty (a195251a12c0e8f9d9994c2662d3eb31c0a50eb1)\nI0405 17:38:15.642150       1 api.go:51] Launching server on :22624\nI0405 17:38:15.642292       1 api.go:51] Launching server on :22623\n
Apr 05 17:48:17.861 E ns/openshift-etcd pod/etcd-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): 153-86.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-86.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-05T17:20:54.657Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-05T17:20:54.657Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-86.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-86.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-05T17:20:54.666Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-05T17:20:54.666Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"warn","ts":"2020-04-05T17:20:54.667Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.153.86:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.153.86:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-05T17:20:54.668Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-05T17:20:55.668Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.153.86:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.153.86:9978: connect: connection refused\". Reconnecting..."}\n
Apr 05 17:48:17.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-apiserver container exited with code 1 (Error): 405 17:45:36.929866       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.929950       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.930020       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nE0405 17:45:36.930177       1 watcher.go:197] watch chan error: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\nW0405 17:45:36.981175       1 reflector.go:402] storage/cacher.go:/k8s.cni.cncf.io/network-attachment-definitions: watch of k8s.cni.cncf.io/v1, Kind=NetworkAttachmentDefinition ended with: Internal error occurred: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\nI0405 17:45:36.930188       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.930255       1 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick\nW0405 17:45:36.981329       1 reflector.go:402] storage/cacher.go:/horizontalpodautoscalers: watch of *autoscaling.HorizontalPodAutoscaler ended with: Internal error occurred: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field\nI0405 17:45:36.930264       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.930422       1 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick\nI0405 17:45:36.930432       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.935710       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0405 17:45:36.936640       1 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick\n
Apr 05 17:48:17.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): ller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772517       1 base_controller.go:52] Shutting down worker of CertRotationController controller ...\nI0405 17:45:36.774828       1 base_controller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772528       1 base_controller.go:52] Shutting down worker of CertRotationController controller ...\nI0405 17:45:36.774875       1 base_controller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772538       1 base_controller.go:52] Shutting down worker of CertRotationController controller ...\nI0405 17:45:36.774922       1 base_controller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772562       1 base_controller.go:52] Shutting down worker of CertRotationController controller ...\nI0405 17:45:36.774971       1 base_controller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772573       1 base_controller.go:52] Shutting down worker of CertRotationController controller ...\nI0405 17:45:36.775018       1 base_controller.go:42] All CertRotationController workers have been terminated\nI0405 17:45:36.772610       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0405 17:45:36.772634       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0405 17:45:36.772660       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0405 17:45:36.772688       1 reflector.go:181] Stopping reflector *v1.Infrastructure (10m0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0405 17:45:36.772713       1 reflector.go:181] Stopping reflector *v1.Network (10m0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0405 17:45:36.772723       1 certrotationcontroller.go:544] Shutting down CertRotation\n
Apr 05 17:48:17.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0405 17:19:55.395596       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 05 17:48:17.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-86.us-west-2.compute.internal node/ip-10-0-153-86.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:45:22.306246       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:22.306543       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0405 17:45:32.319823       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0405 17:45:32.320679       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 05 17:48:24.411 E ns/openshift-machine-config-operator pod/machine-config-daemon-k8prm node/ip-10-0-153-86.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 05 17:51:01.877 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator console is reporting a failure: RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com/health): Get https://console-openshift-console.apps.ci-op-c6lxz8nf-f83f1.origin-ci-int-aws.dev.rhcloud.com/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Apr 05 17:53:57.306 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-142-68.us-west-2.compute.internal members are unhealthy,  members are unknown