Result: SUCCESS
Tests: 3 failed / 22 succeeded
Started: 2020-02-28 17:36
Elapsed: 1h41m
Work namespace: ci-op-v86zwsvl
Refs: openshift-4.5:7108e9c1, 36:168879bd
Pod: c6a05411-5a50-11ea-ac36-0a58ac10ef63
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Cluster frontend ingress remain available (48m14s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sCluster\sfrontend\singress\sremain\savailable$'
Frontends were unreachable during disruption for at least 12s of 48m13s (0%):

Feb 28 18:33:40.986 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 28 18:33:40.987 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 28 18:33:41.757 E ns/openshift-console route/console Route is not responding to GET requests on reused connections
Feb 28 18:33:41.757 E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 28 18:33:42.133 I ns/openshift-console route/console Route started responding to GET requests on reused connections
Feb 28 18:33:42.143 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 28 18:45:28.134 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 28 18:45:28.533 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 28 18:49:07.757 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 28 18:49:08.117 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 28 18:51:15.945 E ns/openshift-console route/console Route stopped responding to GET requests over new connections
Feb 28 18:51:16.756 - 4s    E ns/openshift-console route/console Route is not responding to GET requests over new connections
Feb 28 18:51:22.509 I ns/openshift-console route/console Route started responding to GET requests over new connections
Feb 28 18:57:08.757 E ns/openshift-console route/console Route stopped responding to GET requests on reused connections
Feb 28 18:57:09.106 I ns/openshift-console route/console Route started responding to GET requests on reused connections
				from junit_upgrade_1582916741.xml
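For scale, 12s out of 48m13s (about 2,893s) is roughly 0.4%, reported above as 0%. A minimal sketch for tallying the stop events above from a saved copy of the monitor output (the file name events.txt is hypothetical; standard grep only):

    # Count how often the console route stopped responding, per connection type.
    grep -c 'Route stopped responding to GET requests over new connections' events.txt
    grep -c 'Route stopped responding to GET requests on reused connections' events.txt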



Cluster upgrade Kubernetes and OpenShift APIs remain available (48m14s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 3s of 48m13s (0%):

Feb 28 18:49:18.419 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Feb 28 18:49:18.419 E kube-apiserver Kube API started failing: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Feb 28 18:49:19.366 E kube-apiserver Kube API is not responding to GET requests
Feb 28 18:49:19.366 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 28 18:49:19.691 I kube-apiserver Kube API started responding to GET requests
Feb 28 18:49:19.692 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1582916741.xml
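The URLs in the events above show exactly what the availability monitor was polling. A hedged sketch of repeating the same GET by hand (the URL is copied from the log; TOKEN and the CA bundle path are assumptions about locally available cluster credentials):

    # Re-issue the monitor's request against the kube-apiserver; a healthy API returns the kube-system namespace object.
    curl --cacert ./ca.crt -H "Authorization: Bearer ${TOKEN}" \
      "https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s"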



openshift-tests Monitor cluster while tests execute (49m20s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
240 error level events were detected during this test run:

Feb 28 18:18:50.059 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-6c94f876f" has successfully progressed.
Feb 28 18:20:20.135 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-765986495c-x82t2 node/ip-10-0-141-183.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): lid bearer token, [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]]\nE0228 18:14:40.502591       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]]\nE0228 18:14:53.982880       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]]\nE0228 18:15:10.497820       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]]\nI0228 18:20:19.312991       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:20:19.313247       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0228 18:20:19.313335       1 base_controller.go:74] Shutting down InstallerController ...\nI0228 18:20:19.313362       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0228 18:20:19.313375       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0228 18:20:19.313392       1 base_controller.go:74] Shutting down NodeController ...\nI0228 18:20:19.313404       1 base_controller.go:74] Shutting down PruneController ...\nI0228 18:20:19.313416       1 base_controller.go:74] Shutting down  ...\nI0228 18:20:19.313428       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0228 18:20:19.313442       1 target_config_reconciler.go:124] Shutting down TargetConfigReconciler\nI0228 18:20:19.313454       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0228 18:20:19.313462       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0228 18:20:19.313473       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nF0228 18:20:19.314341       1 builder.go:243] stopped\n
Feb 28 18:21:42.937 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-6985cdffcf-pcxx6 node/ip-10-0-141-183.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): Reference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"315742c5-191a-49c0-b58e-a7e5c2e1fefa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObserveStorageUpdated' Updated storage urls to https://10.0.140.89:2379,https://10.0.141.183:2379,https://10.0.153.236:2379\nI0228 18:20:56.786063       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"315742c5-191a-49c0-b58e-a7e5c2e1fefa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObserveStorageUpdated' Updated storage urls to https://10.0.140.89:2379,https://10.0.141.183:2379,https://10.0.153.236:2379\nI0228 18:21:26.779964       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"315742c5-191a-49c0-b58e-a7e5c2e1fefa", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObserveStorageUpdated' Updated storage urls to https://10.0.140.89:2379,https://10.0.141.183:2379,https://10.0.153.236:2379\nI0228 18:21:41.881687       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:21:41.882063       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0228 18:21:41.882167       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0228 18:21:41.882241       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0228 18:21:41.882288       1 state_controller.go:171] Shutting down EncryptionStateController\nI0228 18:21:41.882333       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0228 18:21:41.882364       1 prune_controller.go:232] Shutting down PruneController\nI0228 18:21:41.882403       1 condition_controller.go:202] Shutting down EncryptionConditionController\nF0228 18:21:41.882549       1 builder.go:210] server exited\n
Feb 28 18:21:53.416 E kube-apiserver Kube API started failing: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 28 18:22:17.944 E ns/openshift-machine-api pod/machine-api-operator-65f7f49b7b-mj44b node/ip-10-0-141-183.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 28 18:23:45.725 E kube-apiserver failed contacting the API: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=24029&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 52.40.206.181:6443: connect: connection refused
Feb 28 18:23:45.726 E kube-apiserver failed contacting the API: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=24060&timeout=5m12s&timeoutSeconds=312&watch=true: dial tcp 52.40.206.181:6443: connect: connection refused
Feb 28 18:25:53.860 E ns/openshift-monitoring pod/cluster-monitoring-operator-6d6bc85657-jtw6d node/ip-10-0-153-236.us-west-2.compute.internal container=cluster-monitoring-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:00.716 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-5597d74574-8fjct node/ip-10-0-150-39.us-west-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:02.758 E ns/openshift-authentication pod/oauth-openshift-9c55475c7-c6bkc node/ip-10-0-141-183.us-west-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:03.875 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7f58bbjl9 node/ip-10-0-141-183.us-west-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:03.921 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-85d48cd5b5-svlz8 node/ip-10-0-141-183.us-west-2.compute.internal container=operator container exited with code 255 (Error): ce bindings found, nothing to delete.\nI0228 18:01:29.431124       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0228 18:01:29.491173       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0228 18:01:39.496560       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0228 18:01:49.426928       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0228 18:01:49.430998       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0228 18:01:49.501131       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0228 18:01:59.552942       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0228 18:02:09.426703       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0228 18:02:09.430723       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0228 18:02:09.559398       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0228 18:02:13.745152       1 observer_polling.go:88] Observed change: file:/var/run/secrets/serving-cert/tls.crt (current: "26ea3f21668964c1acb2c7563f7dbf05fa62fb52de8f0e94e8e908858806d764", lastKnown: "")\nW0228 18:02:13.745185       1 builder.go:108] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was created\nI0228 18:02:13.745224       1 observer_polling.go:88] Observed change: file:/var/run/secrets/serving-cert/tls.key (current: "101ced076b9802d1f0544e69a1673b6fe3d578f2d82c5301e3288f69270666b5", lastKnown: "")\nF0228 18:02:13.745255       1 leaderelection.go:66] leaderelection lost\n
Feb 28 18:26:17.772 E ns/openshift-image-registry pod/image-registry-b494c8f4b-rcfxk node/ip-10-0-129-69.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:18.130 E ns/openshift-monitoring pod/prometheus-adapter-868957bd4c-v24pf node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0228 18:12:44.444283       1 adapter.go:93] successfully using in-cluster auth\nI0228 18:12:45.011761       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 28 18:26:18.937 E ns/openshift-image-registry pod/image-registry-b494c8f4b-cftbg node/ip-10-0-137-137.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:19.156 E ns/openshift-image-registry pod/image-registry-b494c8f4b-npsn5 node/ip-10-0-129-69.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:19.844 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:19.844 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:19.844 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:20.153 E ns/openshift-controller-manager pod/controller-manager-762k4 node/ip-10-0-140-89.us-west-2.compute.internal container=controller-manager container exited with code 137 (OOMKilled): I0228 18:07:04.424335       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:07:04.425617       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable-initial@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:07:04.425634       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:07:04.425697       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0228 18:07:04.425757       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nE0228 18:13:34.092569       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0228 18:14:04.091140       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0228 18:14:34.093269       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0228 18:14:43.994530       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0228 18:15:04.089774       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0228 18:15:13.988322       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n
Feb 28 18:26:20.777 E ns/openshift-monitoring pod/kube-state-metrics-94c6867d7-rv4wr node/ip-10-0-150-39.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 28 18:26:20.835 E ns/openshift-monitoring pod/openshift-state-metrics-d8bc67d4-m45lf node/ip-10-0-150-39.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 28 18:26:21.343 E ns/openshift-controller-manager pod/controller-manager-xnhzf node/ip-10-0-153-236.us-west-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:22.231 E ns/openshift-service-ca-operator pod/service-ca-operator-7c858ccfd7-5jtvt node/ip-10-0-141-183.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 28 18:26:24.788 E ns/openshift-monitoring pod/prometheus-adapter-868957bd4c-mtjt5 node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0228 18:12:45.437762       1 adapter.go:93] successfully using in-cluster auth\nI0228 18:12:46.489094       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:32.638 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:35.745 E ns/openshift-monitoring pod/thanos-querier-86b79bbc9f-22dh8 node/ip-10-0-129-69.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/28 18:14:04 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:14:04 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:14:04 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:14:04 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/28 18:14:04 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:14:04 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:14:04 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/28 18:14:04 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/28 18:14:04 http.go:107: HTTPS: listening on [::]:9091\nI0228 18:14:04.210524       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 28 18:26:45.234 E ns/openshift-monitoring pod/node-exporter-5mtp5 node/ip-10-0-140-89.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -28T18:06:38Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-28T18:06:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 28 18:26:50.043 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:50.043 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:50.043 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:55.465 E ns/openshift-image-registry pod/node-ca-g2298 node/ip-10-0-153-236.us-west-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:26:57.012 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-28T18:26:51.692Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-28T18:26:51.697Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-28T18:26:51.697Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-28T18:26:51.698Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-28T18:26:51.698Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-28T18:26:51.698Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-28T18:26:51.699Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-28T18:26:51.699Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-28
Feb 28 18:26:59.813 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/28 18:14:24 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 28 18:26:59.813 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/28 18:14:25 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/28 18:14:25 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:14:25 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:14:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/28 18:14:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:14:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/28 18:14:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/28 18:14:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/28 18:14:25 http.go:107: HTTPS: listening on [::]:9091\nI0228 18:14:25.108637       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/28 18:26:12 oauthproxy.go:774: basicauth: 10.131.0.24:44852 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/28 18:26:18 oauthproxy.go:774: basicauth: 10.130.0.75:50018 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 28 18:26:59.813 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-28T18:14:24.421350279Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-28T18:14:24.423658818Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-28T18:14:29.565094957Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-28T18:14:29.565186089Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 28 18:27:00.812 E ns/openshift-ingress pod/router-default-c4d89866b-5xcdx node/ip-10-0-137-137.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:04.817702       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:09.809489       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:15.315341       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:20.311960       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:25.419816       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:30.289349       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:35.306549       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:40.298443       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:45.292066       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:26:55.350318       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 28 18:27:10.503 E ns/openshift-console-operator pod/console-operator-7589bc699b-lrss9 node/ip-10-0-153-236.us-west-2.compute.internal container=console-operator container exited with code 255 (Error):  during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502104       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502111       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502117       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502122       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502128       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502133       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502139       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502144       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502149       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502156       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.502161       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:23:45.539040       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0228 18:23:54.062719       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 21; INTERNAL_ERROR") has prevented the request from succeeding\nI0228 18:27:09.887501       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:27:09.888247       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0228 18:27:09.888272       1 builder.go:210] server exited\n
Feb 28 18:27:25.990 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-28T18:27:15.763Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-28T18:27:15.768Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-28T18:27:15.769Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-28T18:27:15.770Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-28T18:27:15.770Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-28T18:27:15.770Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-28T18:27:15.771Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-28T18:27:15.771Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-28
Feb 28 18:27:28.367 E ns/openshift-controller-manager pod/controller-manager-txsdm node/ip-10-0-140-89.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0228 18:26:30.330586       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:26:30.332098       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:26:30.332112       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:26:30.332183       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0228 18:26:30.332251       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Feb 28 18:27:28.508 E ns/openshift-controller-manager pod/controller-manager-jr558 node/ip-10-0-141-183.us-west-2.compute.internal container=controller-manager container exited with code 137 (OOMKilled): I0228 18:26:30.525077       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:26:30.527302       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0228 18:26:30.527237       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:26:30.528753       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:26:30.528907       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 28 18:27:28.554 E ns/openshift-controller-manager pod/controller-manager-cvl9m node/ip-10-0-153-236.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0228 18:26:29.886574       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:26:29.887961       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:26:29.887979       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:26:29.888065       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0228 18:26:29.888065       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Feb 28 18:27:33.017 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-d9767667c-5nn9n node/ip-10-0-129-69.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 28 18:27:38.035 E ns/openshift-monitoring pod/node-exporter-cggkw node/ip-10-0-129-69.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:26:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:26:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:26:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:26:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:26:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:27:11Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:27:26Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 28 18:27:49.578 E ns/openshift-console pod/console-5589f4dff9-lp59n node/ip-10-0-141-183.us-west-2.compute.internal container=console container exited with code 2 (Error): 28T18:08:24Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:08:34Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:08:44Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:08:54Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:04Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:14Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:24Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:34Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:44Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:09:54Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-02-28T18:10:04Z cmd/main: Binding to [::]:8443...\n2020-02-28T18:10:04Z cmd/main: using TLS\n
Feb 28 18:33:35.498 E ns/openshift-sdn pod/sdn-controller-9j47d node/ip-10-0-153-236.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0228 17:57:29.541588       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0228 18:02:59.714137       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: leader changed\nE0228 18:04:40.355867       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\nE0228 18:15:23.297314       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: configmaps "openshift-network-controller" is forbidden: User "system:serviceaccount:openshift-sdn:sdn-controller" cannot get resource "configmaps" in API group "" in the namespace "openshift-sdn"\n
Feb 28 18:33:38.374 E ns/openshift-sdn pod/sdn-g29lq node/ip-10-0-140-89.us-west-2.compute.internal container=sdn container exited with code 255 (Error): vents\nI0228 18:30:50.635124    2645 proxier.go:347] userspace syncProxyRules took 66.917736ms\nI0228 18:31:20.867921    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:31:20.867938    2645 proxier.go:347] userspace syncProxyRules took 72.074698ms\nI0228 18:31:51.092982    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:31:51.093000    2645 proxier.go:347] userspace syncProxyRules took 65.130354ms\nI0228 18:32:21.338007    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:32:21.338026    2645 proxier.go:347] userspace syncProxyRules took 65.183084ms\nI0228 18:32:51.559537    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:32:51.559553    2645 proxier.go:347] userspace syncProxyRules took 62.882991ms\nI0228 18:33:21.773571    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:33:21.773587    2645 proxier.go:347] userspace syncProxyRules took 61.981039ms\nI0228 18:33:25.508457    2645 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.2:6443 10.130.0.18:6443]\nI0228 18:33:25.508559    2645 roundrobin.go:217] Delete endpoint 10.128.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0228 18:33:25.750601    2645 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:33:25.750619    2645 proxier.go:347] userspace syncProxyRules took 62.942251ms\nI0228 18:33:31.881389    2645 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0228 18:33:31.881415    2645 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0228 18:33:33.804089    2645 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0228 18:33:37.703967    2645 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 28 18:33:52.112 E ns/openshift-sdn pod/sdn-controller-rr2vj node/ip-10-0-140-89.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): 69.us-west-2.compute.internal (host: "ip-10-0-129-69.us-west-2.compute.internal", ip: "10.0.129.69", subnet: "10.129.2.0/23")\nE0228 18:15:17.311011       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/hostsubnets?allowWatchBookmarks=true&resourceVersion=12941&timeout=9m0s&timeoutSeconds=540&watch=true: dial tcp 10.0.155.91:6443: connect: connection refused\nI0228 18:16:27.040243       1 vnids.go:115] Allocated netid 6393614 for namespace "e2e-control-plane-available-353"\nI0228 18:16:27.051297       1 vnids.go:115] Allocated netid 5251810 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-7157"\nI0228 18:16:27.060540       1 vnids.go:115] Allocated netid 13256251 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-1329"\nI0228 18:16:27.067337       1 vnids.go:115] Allocated netid 13298545 for namespace "e2e-k8s-sig-apps-job-upgrade-1623"\nI0228 18:16:27.085183       1 vnids.go:115] Allocated netid 2242000 for namespace "e2e-k8s-sig-apps-deployment-upgrade-9626"\nI0228 18:16:27.100640       1 vnids.go:115] Allocated netid 973832 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9476"\nI0228 18:16:27.133415       1 vnids.go:115] Allocated netid 9483454 for namespace "e2e-frontend-ingress-available-7744"\nI0228 18:16:27.145622       1 vnids.go:115] Allocated netid 15383239 for namespace "e2e-k8s-service-lb-available-1349"\nI0228 18:16:27.154143       1 vnids.go:115] Allocated netid 16138761 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4604"\nE0228 18:23:45.470206       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://api-int.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=23793&timeout=8m31s&timeoutSeconds=511&watch=true: dial tcp 10.0.155.91:6443: connect: connection refused\n
Feb 28 18:33:55.948 E ns/openshift-multus pod/multus-77g9s node/ip-10-0-150-39.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 28 18:33:56.122 E ns/openshift-multus pod/multus-admission-controller-g9hrn node/ip-10-0-140-89.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 28 18:34:02.639 E ns/openshift-sdn pod/sdn-tk4qx node/ip-10-0-137-137.us-west-2.compute.internal container=sdn container exited with code 255 (Error): -default:http" at 172.30.135.146:80/TCP\nI0228 18:33:47.733794   12259 service.go:363] Adding new service port "openshift-ingress/router-default:https" at 172.30.135.146:443/TCP\nI0228 18:33:47.733815   12259 service.go:363] Adding new service port "openshift-multus/multus-admission-controller:" at 172.30.108.13:443/TCP\nI0228 18:33:47.733844   12259 service.go:363] Adding new service port "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:" at 172.30.242.114:443/TCP\nI0228 18:33:47.733865   12259 service.go:363] Adding new service port "openshift-operator-lifecycle-manager/olm-operator-metrics:https-metrics" at 172.30.2.23:8081/TCP\nI0228 18:33:47.734176   12259 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0228 18:33:47.882971   12259 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:33:47.883000   12259 proxier.go:347] userspace syncProxyRules took 150.127604ms\nI0228 18:33:47.965353   12259 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-1349/service-test:" (:32483/tcp)\nI0228 18:33:47.965817   12259 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32293/tcp)\nI0228 18:33:47.965920   12259 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31390/tcp)\nI0228 18:33:48.000386   12259 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30476\nI0228 18:33:48.007445   12259 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0228 18:33:48.007474   12259 cmd.go:173] openshift-sdn network plugin registering startup\nI0228 18:33:48.007583   12259 cmd.go:177] openshift-sdn network plugin ready\nI0228 18:34:02.522209   12259 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0228 18:34:02.522254   12259 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 28 18:34:40.143 E ns/openshift-sdn pod/sdn-cw5hp node/ip-10-0-150-39.us-west-2.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:34:42.730 E ns/openshift-multus pod/multus-mzrzz node/ip-10-0-137-137.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 28 18:34:56.726 E ns/openshift-sdn pod/sdn-bdgbl node/ip-10-0-153-236.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 18310 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32293/tcp)\nI0228 18:34:12.809165   18310 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-1349/service-test:" (:32483/tcp)\nI0228 18:34:12.809278   18310 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31390/tcp)\nI0228 18:34:12.848603   18310 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30476\nI0228 18:34:12.855156   18310 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0228 18:34:12.855184   18310 cmd.go:173] openshift-sdn network plugin registering startup\nI0228 18:34:12.855269   18310 cmd.go:177] openshift-sdn network plugin ready\nI0228 18:34:13.152247   18310 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.129.0.2:6443 10.130.0.18:6443]\nI0228 18:34:13.161602   18310 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.130.0.18:6443]\nI0228 18:34:13.161629   18310 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0228 18:34:13.389842   18310 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:34:13.389862   18310 proxier.go:347] userspace syncProxyRules took 63.239624ms\nI0228 18:34:13.618908   18310 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:34:13.618929   18310 proxier.go:347] userspace syncProxyRules took 75.290153ms\nI0228 18:34:43.515918   18310 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-p7m46\nI0228 18:34:43.899885   18310 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:34:43.899904   18310 proxier.go:347] userspace syncProxyRules took 67.133617ms\nF0228 18:34:56.270825   18310 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 28 18:35:21.382 E ns/openshift-multus pod/multus-7bwll node/ip-10-0-140-89.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 28 18:35:22.192 E ns/openshift-sdn pod/sdn-ljxs8 node/ip-10-0-150-39.us-west-2.compute.internal container=sdn container exited with code 255 (Error): vice-lb-available-1349/service-test:" (:32483/tcp)\nI0228 18:34:46.913662   33432 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32293/tcp)\nI0228 18:34:46.913917   33432 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31390/tcp)\nI0228 18:34:46.949041   33432 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30476\nI0228 18:34:46.956274   33432 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0228 18:34:46.956303   33432 cmd.go:173] openshift-sdn network plugin registering startup\nI0228 18:34:46.956427   33432 cmd.go:177] openshift-sdn network plugin ready\nI0228 18:35:04.720723   33432 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.129.0.64:6443 10.130.0.18:6443]\nI0228 18:35:04.731458   33432 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.129.0.64:6443]\nI0228 18:35:04.731489   33432 roundrobin.go:217] Delete endpoint 10.130.0.18:6443 for service "openshift-multus/multus-admission-controller:"\nI0228 18:35:04.967505   33432 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:35:04.967531   33432 proxier.go:347] userspace syncProxyRules took 71.419316ms\nI0228 18:35:05.202928   33432 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:35:05.202954   33432 proxier.go:347] userspace syncProxyRules took 69.853726ms\nI0228 18:35:16.195534   33432 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0228 18:35:22.012700   33432 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0228 18:35:22.012739   33432 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 28 18:35:39.150 E ns/openshift-sdn pod/sdn-mzmhl node/ip-10-0-129-69.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ess/router-default:http" (:32293/tcp)\nI0228 18:35:03.780992   35530 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31390/tcp)\nI0228 18:35:03.781276   35530 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-1349/service-test:" (:32483/tcp)\nI0228 18:35:03.826733   35530 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30476\nI0228 18:35:04.003719   35530 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0228 18:35:04.003752   35530 cmd.go:173] openshift-sdn network plugin registering startup\nI0228 18:35:04.003874   35530 cmd.go:177] openshift-sdn network plugin ready\nI0228 18:35:04.721511   35530 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.129.0.64:6443 10.130.0.18:6443]\nI0228 18:35:04.730861   35530 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.56:6443 10.129.0.64:6443]\nI0228 18:35:04.730896   35530 roundrobin.go:217] Delete endpoint 10.130.0.18:6443 for service "openshift-multus/multus-admission-controller:"\nI0228 18:35:04.977776   35530 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:35:04.977799   35530 proxier.go:347] userspace syncProxyRules took 70.12275ms\nI0228 18:35:05.217479   35530 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:35:05.217498   35530 proxier.go:347] userspace syncProxyRules took 69.861545ms\nI0228 18:35:35.470414   35530 proxier.go:368] userspace proxy: processing 0 service events\nI0228 18:35:35.470441   35530 proxier.go:347] userspace syncProxyRules took 75.676487ms\nI0228 18:35:38.988161   35530 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0228 18:35:38.988206   35530 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 28 18:36:06.200 E ns/openshift-multus pod/multus-rs4x2 node/ip-10-0-129-69.us-west-2.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Feb 28 18:36:51.331 E ns/openshift-multus pod/multus-zkl55 node/ip-10-0-141-183.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 28 18:37:35.123 E ns/openshift-multus pod/multus-rtl7w node/ip-10-0-153-236.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 28 18:39:53.923 E ns/openshift-machine-config-operator pod/machine-config-operator-58c96bc4c4-8pv9t node/ip-10-0-141-183.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): e:"", Name:"machine-config", UID:"1abc98cb-1ee7-4170-86e6-c0dbeb5fb3a4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-02-28-173655}]\nE0228 18:01:05.607969       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0228 18:01:05.614328       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0228 18:01:06.637542       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0228 18:01:10.409312       1 sync.go:61] [init mode] synced RenderConfig in 5.35894376s\nI0228 18:01:10.598471       1 sync.go:61] [init mode] synced MachineConfigPools in 189.073167ms\nI0228 18:02:02.281570       1 sync.go:61] [init mode] synced MachineConfigDaemon in 51.683064048s\nI0228 18:02:07.326283       1 sync.go:61] [init mode] synced MachineConfigController in 5.044666073s\nI0228 18:02:14.376095       1 sync.go:61] [init mode] synced MachineConfigServer in 7.049775783s\nI0228 18:02:32.382150       1 sync.go:61] [init mode] synced RequiredPools in 18.00602308s\nI0228 18:02:32.403695       1 sync.go:85] Initialization complete\nE0228 18:04:40.390499       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Feb 28 18:41:48.964 E ns/openshift-machine-config-operator pod/machine-config-daemon-sbmds node/ip-10-0-129-69.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:41:57.590 E ns/openshift-machine-config-operator pod/machine-config-daemon-vdgd4 node/ip-10-0-137-137.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:42:36.566 E ns/openshift-machine-config-operator pod/machine-config-daemon-856vm node/ip-10-0-140-89.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:42:58.531 E ns/openshift-machine-config-operator pod/machine-config-daemon-w2978 node/ip-10-0-141-183.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:43:06.145 E ns/openshift-machine-config-operator pod/machine-config-daemon-mr4lk node/ip-10-0-150-39.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:43:21.001 E ns/openshift-machine-config-operator pod/machine-config-controller-5f99d56cf4-4lst4 node/ip-10-0-153-236.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): ernal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-cf436499e683389fc6faf3f5f7d3f649\nI0228 18:07:37.304948       1 node_controller.go:452] Pool worker: node ip-10-0-129-69.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0228 18:33:34.846979       1 node_controller.go:433] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal is now reporting unready: node ip-10-0-150-39.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:34:14.868621       1 node_controller.go:435] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal is now reporting ready\nI0228 18:35:01.721854       1 node_controller.go:433] Pool master: node ip-10-0-140-89.us-west-2.compute.internal is now reporting unready: node ip-10-0-140-89.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:35:41.745444       1 node_controller.go:435] Pool master: node ip-10-0-140-89.us-west-2.compute.internal is now reporting ready\nI0228 18:35:47.179237       1 node_controller.go:433] Pool worker: node ip-10-0-129-69.us-west-2.compute.internal is now reporting unready: node ip-10-0-129-69.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:35:57.192981       1 node_controller.go:435] Pool worker: node ip-10-0-129-69.us-west-2.compute.internal is now reporting ready\nI0228 18:36:31.904691       1 node_controller.go:433] Pool master: node ip-10-0-141-183.us-west-2.compute.internal is now reporting unready: node ip-10-0-141-183.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:37:11.928128       1 node_controller.go:435] Pool master: node ip-10-0-141-183.us-west-2.compute.internal is now reporting ready\nI0228 18:37:12.571068       1 node_controller.go:433] Pool master: node ip-10-0-153-236.us-west-2.compute.internal is now reporting unready: node ip-10-0-153-236.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:38:02.596515       1 node_controller.go:435] Pool master: node ip-10-0-153-236.us-west-2.compute.internal is now reporting ready\n
Feb 28 18:45:05.233 E ns/openshift-machine-config-operator pod/machine-config-server-6hrlp node/ip-10-0-153-236.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0228 18:02:13.779256       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-310-g1f107922-dirty (1f1079225b3a5464455047cdfa2af4d471da3597)\nI0228 18:02:13.780260       1 api.go:51] Launching server on :22624\nI0228 18:02:13.780320       1 api.go:51] Launching server on :22623\n
Feb 28 18:45:15.031 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-5dc99675bd-fzmsd node/ip-10-0-137-137.us-west-2.compute.internal container=operator container exited with code 255 (Error): 020-02-28 18:27:19.48155405 +0000 UTC m=+81.139855072\nI0228 18:27:20.077841       1 operator.go:147] Finished syncing operator at 596.279175ms\nI0228 18:27:20.077893       1 operator.go:145] Starting syncing operator at 2020-02-28 18:27:20.077886743 +0000 UTC m=+81.736187867\nI0228 18:27:20.675804       1 operator.go:147] Finished syncing operator at 597.907723ms\nI0228 18:27:32.120152       1 operator.go:145] Starting syncing operator at 2020-02-28 18:27:32.120138435 +0000 UTC m=+93.778439436\nI0228 18:27:32.144696       1 operator.go:147] Finished syncing operator at 24.549732ms\nI0228 18:27:32.144755       1 operator.go:145] Starting syncing operator at 2020-02-28 18:27:32.144749253 +0000 UTC m=+93.803050268\nI0228 18:27:32.144990       1 status_controller.go:176] clusteroperator/csi-snapshot-controller diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-28T18:07:10Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-28T18:27:32Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-28T18:07:29Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-28T18:07:14Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0228 18:27:32.157729       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"e8e15552-eeee-40a8-9c48-c37d4d609dac", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("")\nI0228 18:27:32.167007       1 operator.go:147] Finished syncing operator at 22.251856ms\nI0228 18:45:13.952743       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:45:13.953181       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0228 18:45:13.953219       1 builder.go:210] server exited\n
Feb 28 18:45:15.998 E ns/openshift-etcd-operator pod/etcd-operator-6495d65db5-tq5qk node/ip-10-0-140-89.us-west-2.compute.internal container=operator container exited with code 255 (Error): 228 18:45:14.885239       1 base_controller.go:74] Shutting down NodeController ...\nI0228 18:45:14.885247       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0228 18:45:14.885256       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0228 18:45:14.885265       1 scriptcontroller.go:144] Shutting down ScriptControllerController\nI0228 18:45:14.885277       1 base_controller.go:74] Shutting down  ...\nI0228 18:45:14.885321       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0228 18:45:14.885327       1 base_controller.go:39] All RevisionController workers have been terminated\nI0228 18:45:14.885343       1 base_controller.go:49] Shutting down worker of  controller ...\nI0228 18:45:14.885347       1 base_controller.go:39] All  workers have been terminated\nI0228 18:45:14.885363       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0228 18:45:14.885368       1 base_controller.go:39] All PruneController workers have been terminated\nI0228 18:45:14.885380       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0228 18:45:14.885384       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0228 18:45:14.885394       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0228 18:45:14.885398       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0228 18:45:14.885409       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0228 18:45:14.885413       1 base_controller.go:39] All NodeController workers have been terminated\nI0228 18:45:14.885457       1 base_controller.go:49] Shutting down worker of  controller ...\nI0228 18:45:14.885466       1 base_controller.go:39] All  workers have been terminated\nI0228 18:45:14.885561       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0228 18:45:14.885884       1 builder.go:243] stopped\n
Feb 28 18:45:16.135 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.135 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.135 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-137-137.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.192 E ns/openshift-console pod/console-7fc78c9686-mm5w9 node/ip-10-0-140-89.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-02-28T18:27:33Z cmd/main: cookies are secure!\n2020-02-28T18:27:33Z cmd/main: Binding to [::]:8443...\n2020-02-28T18:27:33Z cmd/main: using TLS\n
Feb 28 18:45:16.208 E ns/openshift-kube-storage-version-migrator pod/migrator-754df6656d-t56v5 node/ip-10-0-137-137.us-west-2.compute.internal container=migrator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.370 E ns/openshift-console pod/downloads-654777d558-c2f6r node/ip-10-0-137-137.us-west-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.403 E ns/openshift-monitoring pod/kube-state-metrics-546956b6d-h2h4w node/ip-10-0-137-137.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 28 18:45:16.452 E ns/openshift-monitoring pod/openshift-state-metrics-5794575dc9-7d4jk node/ip-10-0-137-137.us-west-2.compute.internal container=openshift-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.452 E ns/openshift-monitoring pod/openshift-state-metrics-5794575dc9-7d4jk node/ip-10-0-137-137.us-west-2.compute.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:16.452 E ns/openshift-monitoring pod/openshift-state-metrics-5794575dc9-7d4jk node/ip-10-0-137-137.us-west-2.compute.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:17.091 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5bbc95b4f4-gkhts node/ip-10-0-140-89.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): y: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0228 18:37:12.621067       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"a4c22f92-bf8c-4ada-ae26-3e5287a932d3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-153-236.us-west-2.compute.internal\" not ready since 2020-02-28 18:37:12 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)"\nI0228 18:38:02.673353       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"a4c22f92-bf8c-4ada-ae26-3e5287a932d3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-153-236.us-west-2.compute.internal\" not ready since 2020-02-28 18:37:12 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)" to "NodeControllerDegraded: All master nodes are ready"\nI0228 18:45:16.033276       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:45:16.033656       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nF0228 18:45:16.033719       1 builder.go:209] server exited\n
Feb 28 18:45:19.090 E ns/openshift-machine-config-operator pod/machine-config-server-jxswh node/ip-10-0-141-183.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0228 18:02:08.140814       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-310-g1f107922-dirty (1f1079225b3a5464455047cdfa2af4d471da3597)\nI0228 18:02:08.141545       1 api.go:51] Launching server on :22624\nI0228 18:02:08.141675       1 api.go:51] Launching server on :22623\nI0228 18:03:07.588638       1 api.go:97] Pool worker requested by 10.0.155.91:30117\nI0228 18:03:53.436870       1 api.go:97] Pool worker requested by 10.0.155.91:29627\n
Feb 28 18:45:21.140 E ns/openshift-machine-api pod/machine-api-operator-5d4c5b7454-9dhv5 node/ip-10-0-140-89.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:21.140 E ns/openshift-machine-api pod/machine-api-operator-5d4c5b7454-9dhv5 node/ip-10-0-140-89.us-west-2.compute.internal container=machine-api-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:22.091 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-65cc776876-tm4dt node/ip-10-0-140-89.us-west-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): ated' Updated storage urls to https://10.0.140.89:2379,https://10.0.141.183:2379,https://10.0.153.236:2379\nI0228 18:45:20.992030       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:45:20.992790       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0228 18:45:20.992857       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0228 18:45:20.992914       1 state_controller.go:171] Shutting down EncryptionStateController\nI0228 18:45:20.992958       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0228 18:45:20.992976       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0228 18:45:20.992987       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0228 18:45:20.992999       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0228 18:45:20.993010       1 prune_controller.go:232] Shutting down PruneController\nI0228 18:45:20.993026       1 base_controller.go:73] Shutting down  ...\nI0228 18:45:20.993042       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0228 18:45:20.993053       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0228 18:45:20.993064       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0228 18:45:20.993074       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0228 18:45:20.993082       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0228 18:45:20.993097       1 base_controller.go:73] Shutting down RevisionController ...\nI0228 18:45:20.993120       1 workload_controller.go:195] Shutting down OpenShiftAPIServerOperator\nI0228 18:45:20.993142       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nF0228 18:45:20.993317       1 builder.go:243] stopped\nI0228 18:45:20.997212       1 base_controller.go:48] Shutting down worker of UnsupportedConfigOverridesController controller ...\n
Feb 28 18:45:23.476 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-84b7d47997-s9fmb node/ip-10-0-129-69.us-west-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:45:44.163 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 28 18:45:47.483 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-f96d98665-p942r node/ip-10-0-150-39.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 28 18:47:49.374 E ns/openshift-monitoring pod/node-exporter-6cd2b node/ip-10-0-137-137.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:45:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:46:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 28 18:47:49.400 E ns/openshift-cluster-node-tuning-operator pod/tuned-9j8nf node/ip-10-0-137-137.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ecommended profile (openshift-node)\nI0228 18:26:48.795522    1088 tuned.go:286] starting tuned...\n2020-02-28 18:26:48,909 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-28 18:26:48,915 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-28 18:26:48,916 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-28 18:26:48,917 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-28 18:26:48,918 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-28 18:26:48,961 INFO     tuned.daemon.controller: starting controller\n2020-02-28 18:26:48,961 INFO     tuned.daemon.daemon: starting tuning\n2020-02-28 18:26:48,976 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-28 18:26:48,977 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-28 18:26:48,980 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-28 18:26:48,982 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-28 18:26:48,984 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-28 18:26:49,097 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:26:49,107 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0228 18:45:30.031135    1088 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.031154    1088 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0228 18:45:30.039325    1088 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 28 18:47:49.456 E ns/openshift-sdn pod/ovs-w6pdn node/ip-10-0-137-137.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): w_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:15.102Z|00085|connmgr|INFO|br0<->unix#564: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:15.153Z|00086|bridge|INFO|bridge br0: deleted interface vethc217eea6 on port 22\n2020-02-28T18:45:15.206Z|00087|connmgr|INFO|br0<->unix#567: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:15.254Z|00088|connmgr|INFO|br0<->unix#570: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:15.280Z|00089|bridge|INFO|bridge br0: deleted interface vethfbb3d604 on port 20\n2020-02-28T18:45:15.338Z|00090|connmgr|INFO|br0<->unix#573: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:15.385Z|00091|connmgr|INFO|br0<->unix#576: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:15.410Z|00092|bridge|INFO|bridge br0: deleted interface veth68222083 on port 24\n2020-02-28T18:45:15.450Z|00093|connmgr|INFO|br0<->unix#579: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:15.487Z|00094|connmgr|INFO|br0<->unix#582: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:15.519Z|00095|bridge|INFO|bridge br0: deleted interface vethd2e93d62 on port 21\n2020-02-28T18:45:44.258Z|00096|connmgr|INFO|br0<->unix#606: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:44.286Z|00097|connmgr|INFO|br0<->unix#609: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:44.310Z|00098|bridge|INFO|bridge br0: deleted interface veth1df5d66d on port 11\n2020-02-28T18:45:44.524Z|00099|connmgr|INFO|br0<->unix#612: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:44.551Z|00100|connmgr|INFO|br0<->unix#615: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:44.572Z|00101|bridge|INFO|bridge br0: deleted interface vethce713671 on port 12\n2020-02-28T18:45:59.499Z|00102|connmgr|INFO|br0<->unix#627: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:59.525Z|00103|connmgr|INFO|br0<->unix#630: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:59.547Z|00104|bridge|INFO|bridge br0: deleted interface veth7c389f6b on port 15\ninfo: Saving flows ...\nTerminated\n
Feb 28 18:47:49.456 E ns/openshift-multus pod/multus-86zqd node/ip-10-0-137-137.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:47:49.470 E ns/openshift-machine-config-operator pod/machine-config-daemon-jrx6s node/ip-10-0-137-137.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:47:56.317 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 28 18:47:59.306 E ns/openshift-machine-config-operator pod/machine-config-daemon-jrx6s node/ip-10-0-137-137.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 18:48:07.097 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/28 18:27:23 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 28 18:48:07.097 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/28 18:27:24 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/28 18:27:24 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:27:24 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:27:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/28 18:27:24 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:27:24 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/28 18:27:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/28 18:27:24 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/28 18:27:24 http.go:107: HTTPS: listening on [::]:9091\nI0228 18:27:24.334744       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/28 18:30:43 oauthproxy.go:774: basicauth: 10.131.0.24:50350 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/28 18:35:24 oauthproxy.go:774: basicauth: 10.131.0.24:55152 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/28 18:39:54 oauthproxy.go:774: basicauth: 10.131.0.24:59462 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/28 18:41:24 oauthproxy.go:774: basicauth: 10.130.0.75:59000 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/28 18:44:2
Feb 28 18:48:07.097 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-28T18:27:19.933831208Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-02-28T18:27:19.93681515Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-28T18:27:24.935608568Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-28T18:27:30.125793446Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-02-28T18:27:30.125885909Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Feb 28 18:48:07.591 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): ver/apiserver-765c447484-595cs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:45:21.387441       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-765c447484-595cs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:45:23.344108       1 scheduler.go:751] pod openshift-csi-snapshot-controller/csi-snapshot-controller-84b7d47997-s9fmb is bound successfully on node "ip-10-0-129-69.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0228 18:45:23.449826       1 scheduler.go:751] pod openshift-csi-snapshot-controller/csi-snapshot-controller-565479b79d-nhxlw is bound successfully on node "ip-10-0-129-69.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0228 18:45:26.829561       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-765c447484-595cs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:45:26.838098       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-fd49c7ddb-5sw7v: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:45:26.856270       1 scheduler.go:751] pod openshift-machine-config-operator/machine-config-server-dj7h9 is bound successfully on node "ip-10-0-141-183.us-west-2.compute.internal", 6 nodes evaluated, 1 nodes were found feasible.\nI0228 18:45:29.278769       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-765c447484-595cs: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 28 18:48:07.607 E ns/openshift-machine-config-operator pod/machine-config-server-d5wns node/ip-10-0-140-89.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0228 18:02:10.773352       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-310-g1f107922-dirty (1f1079225b3a5464455047cdfa2af4d471da3597)\nI0228 18:02:10.773982       1 api.go:51] Launching server on :22624\nI0228 18:02:10.774022       1 api.go:51] Launching server on :22623\nI0228 18:03:53.719884       1 api.go:97] Pool worker requested by 10.0.134.189:39864\n
Feb 28 18:48:07.660 E ns/openshift-cluster-node-tuning-operator pod/tuned-rhvg5 node/ip-10-0-140-89.us-west-2.compute.internal container=tuned container exited with code 143 (Error): tuned "rendered" added\nI0228 18:27:01.690652    1446 tuned.go:219] extracting tuned profiles\nI0228 18:27:01.694371    1446 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0228 18:27:02.681657    1446 tuned.go:393] getting recommended profile...\nI0228 18:27:02.799593    1446 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0228 18:27:02.799645    1446 tuned.go:286] starting tuned...\n2020-02-28 18:27:02,895 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-28 18:27:02,901 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-28 18:27:02,902 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-28 18:27:02,902 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-28 18:27:02,903 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-28 18:27:02,935 INFO     tuned.daemon.controller: starting controller\n2020-02-28 18:27:02,936 INFO     tuned.daemon.daemon: starting tuning\n2020-02-28 18:27:02,944 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-28 18:27:02,945 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-28 18:27:02,948 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-28 18:27:02,950 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-02-28 18:27:02,951 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-02-28 18:27:03,013 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:27:03,023 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n2020-02-28 18:45:29,827 INFO     tuned.daemon.controller: terminating controller\n2020-02-28 18:45:29,827 INFO     tuned.daemon.daemon: stopping tuning\n
Feb 28 18:48:07.674 E ns/openshift-monitoring pod/node-exporter-m2v7n node/ip-10-0-140-89.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -28T18:27:02Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:02Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 28 18:48:07.684 E ns/openshift-controller-manager pod/controller-manager-mh6qt node/ip-10-0-140-89.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): I0228 18:27:34.396606       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:27:34.397834       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:27:34.397850       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:27:34.397942       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0228 18:27:34.397978       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 28 18:48:07.695 E ns/openshift-multus pod/multus-admission-controller-9h27f node/ip-10-0-140-89.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 28 18:48:07.707 E ns/openshift-multus pod/multus-vjvg7 node/ip-10-0-140-89.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:48:07.730 E ns/openshift-sdn pod/sdn-controller-ng9cx node/ip-10-0-140-89.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0228 18:33:58.357144       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0228 18:33:58.375580       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"b5621f9f-7bdc-40cf-97e4-233ab39903f1", ResourceVersion:"31482", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718509447, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-140-89\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-28T17:57:27Z\",\"renewTime\":\"2020-02-28T18:33:58Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-140-89 became leader'\nI0228 18:33:58.375655       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0228 18:33:58.388585       1 master.go:51] Initializing SDN master\nI0228 18:33:58.396961       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 28 18:48:07.742 E ns/openshift-sdn pod/ovs-8848h node/ip-10-0-140-89.us-west-2.compute.internal container=openvswitch container exited with code 1 (Error): ow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:21.034Z|00095|connmgr|INFO|br0<->unix#614: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:21.066Z|00096|bridge|INFO|bridge br0: deleted interface veth6310fb62 on port 41\n2020-02-28T18:45:21.314Z|00097|connmgr|INFO|br0<->unix#617: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:21.386Z|00098|connmgr|INFO|br0<->unix#620: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:21.417Z|00099|bridge|INFO|bridge br0: deleted interface veth4bd55425 on port 47\n2020-02-28T18:45:21.635Z|00100|connmgr|INFO|br0<->unix#623: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:21.665Z|00101|connmgr|INFO|br0<->unix#626: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:21.685Z|00102|bridge|INFO|bridge br0: deleted interface veth3f0f8967 on port 50\n2020-02-28T18:45:24.178Z|00103|bridge|INFO|bridge br0: added interface veth47d1fe7f on port 59\n2020-02-28T18:45:24.209Z|00104|connmgr|INFO|br0<->unix#631: 5 flow_mods in the last 0 s (5 adds)\n2020-02-28T18:45:24.250Z|00105|connmgr|INFO|br0<->unix#635: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-28T18:45:24.257Z|00106|connmgr|INFO|br0<->unix#637: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:28.070Z|00107|bridge|INFO|bridge br0: added interface veth758b5374 on port 60\n2020-02-28T18:45:28.121Z|00108|connmgr|INFO|br0<->unix#643: 5 flow_mods in the last 0 s (5 adds)\n2020-02-28T18:45:28.190Z|00109|connmgr|INFO|br0<->unix#648: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:28.191Z|00110|connmgr|INFO|br0<->unix#649: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-28T18:45:28.217Z|00111|connmgr|INFO|br0<->unix#652: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:45:28.256Z|00112|connmgr|INFO|br0<->unix#655: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:45:28.279Z|00113|bridge|INFO|bridge br0: deleted interface veth47d1fe7f on port 59\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Feb 28 18:48:07.765 E ns/openshift-machine-config-operator pod/machine-config-daemon-sh6ph node/ip-10-0-140-89.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:48:08.193 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-565479b79d-nhxlw node/ip-10-0-129-69.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Feb 28 18:48:08.235 E ns/openshift-marketplace pod/redhat-marketplace-7596dd7999-x6phd node/ip-10-0-129-69.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Feb 28 18:48:08.247 E ns/openshift-monitoring pod/kube-state-metrics-546956b6d-qsws8 node/ip-10-0-129-69.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 28 18:48:08.258 E ns/openshift-monitoring pod/prometheus-adapter-84479fcfb8-jtxn2 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0228 18:26:16.541600       1 adapter.go:93] successfully using in-cluster auth\nI0228 18:26:17.257584       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 28 18:48:08.271 E ns/openshift-ingress pod/router-default-7769c9f68d-49zf6 node/ip-10-0-129-69.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:45:46.535041       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:45:51.534445       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:45:56.528444       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:46:26.430811       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:46:31.418171       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:46:46.745672       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:46:51.736451       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:47:54.285649       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:47:59.270292       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:48:04.272653       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 28 18:48:09.764 E ns/openshift-monitoring pod/grafana-77b655d84f-bh4ll node/ip-10-0-129-69.us-west-2.compute.internal container=grafana-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:48:09.764 E ns/openshift-monitoring pod/grafana-77b655d84f-bh4ll node/ip-10-0-129-69.us-west-2.compute.internal container=grafana container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:48:09.792 E ns/openshift-monitoring pod/thanos-querier-5845666899-v9x97 node/ip-10-0-129-69.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/28 18:26:18 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:26:18 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:26:18 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:26:18 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/28 18:26:18 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:26:18 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:26:18 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/28 18:26:18 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/28 18:26:18 http.go:107: HTTPS: listening on [::]:9091\nI0228 18:26:18.682224       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Feb 28 18:48:09.818 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-69.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/28 18:27:30 Watching directory: "/etc/alertmanager/config"\n
Feb 28 18:48:09.818 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-129-69.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/28 18:27:30 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/28 18:27:30 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:27:30 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:27:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/28 18:27:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:27:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/28 18:27:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0228 18:27:30.933420       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/28 18:27:30 http.go:107: HTTPS: listening on [::]:9095\n
Feb 28 18:48:09.961 E ns/openshift-marketplace pod/certified-operators-58d5879477-csczx node/ip-10-0-129-69.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Feb 28 18:48:15.207 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): k8s.io" failed with: OpenAPI spec does not exist\nI0228 18:44:53.033737       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0228 18:45:13.929382       1 cacher.go:782] cacher (*core.Pod): 1 objects queued in incoming channel.\nI0228 18:45:13.929547       1 cacher.go:782] cacher (*core.Pod): 2 objects queued in incoming channel.\nI0228 18:45:13.948086       1 cacher.go:782] cacher (*core.Endpoints): 1 objects queued in incoming channel.\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:45:21 httputil: ReverseProxy read error during body copy: unexpected EOF\nW0228 18:45:21.292653       1 reflector.go:326] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 3121; INTERNAL_ERROR") has prevented the request from succeeding\nE0228 18:45:29.827312       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nI0228 18:45:29.849580       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-140-89.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0228 18:45:29.849818       1 controller.go:180] Shutting down kubernetes service endpoint reconciler\n
Feb 28 18:48:15.207 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0228 18:23:48.879071       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 28 18:48:15.207 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:45:14.709771       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:14.710241       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:45:24.717103       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:24.717356       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 28 18:48:15.207 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): ster.local]\nI0228 18:44:43.671961       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0228 18:45:29.847087       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:45:29.847691       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0228 18:45:29.847772       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0228 18:45:29.848258       1 cabundlesyncer.go:86] CA bundle controller shut down\nI0228 18:45:29.847779       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0228 18:45:29.847787       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0228 18:45:29.847792       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0228 18:45:29.847798       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0228 18:45:29.847803       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0228 18:45:29.847810       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0228 18:45:29.847826       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0228 18:45:29.847833       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0228 18:45:29.847839       1 certrotationcontroller.go:560] Shutting down CertRotation\nE0228 18:45:30.030538       1 leaderelection.go:307] Failed to release lock: Put https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/configmaps/cert-regeneration-controller-lock?timeout=35s: unexpected EOF\nF0228 18:45:30.030782       1 leaderelection.go:67] leaderelection lost\n
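The "leaderelection lost" fatal exits above (and the matching ones from the kube-controller-manager recovery controller later in this log) follow the standard client-go pattern: when a controller can no longer renew or release its lease, OnStoppedLeading fires and the process exits non-zero so the static pod or deployment restarts it. A minimal sketch of that pattern, assuming a client-go of roughly this release's vintage; the lock name, namespace, and identity below are purely illustrative and not the operator's actual code:

// Sketch only: shows how losing a leader-election lease produces a
// fatal "leaderelection lost" exit like the ones recorded above.
package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical lock; the real controllers use their own configmap locks.
	lock, err := resourcelock.New(resourcelock.ConfigMapsResourceLock,
		"example-namespace", "example-controller-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"})
	if err != nil {
		klog.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Controller loops run here while the lease is held.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Losing (or failing to release) the lease is treated as fatal,
				// matching the F-level "leaderelection lost" lines in this log.
				klog.Fatal("leaderelection lost")
			},
		},
	})
}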
Feb 28 18:48:15.231 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0228 18:21:13.112453       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0228 18:21:13.114260       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0228 18:21:13.114582       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0228 18:23:45.886444       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cluster-policy-controller: dial tcp [::1]:6443: connect: connection refused\n
Feb 28 18:48:15.231 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:44:52.866246       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:44:52.866758       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:00.205534       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:00.205781       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:02.877464       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:02.877727       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:10.211629       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:10.211883       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:12.884512       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:12.884767       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:20.220025       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:20.220415       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:22.910558       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:22.910826       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:45:29.838193       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:45:29.838580       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 28 18:48:15.231 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): tch stream event decoding: unexpected EOF\nI0228 18:45:30.058191       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058196       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058201       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058206       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058211       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058216       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058221       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058230       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058236       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058240       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058246       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058251       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058255       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058263       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058270       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058275       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.058279       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Feb 28 18:48:15.231 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error):       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:23:51.808008       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0228 18:23:51.917088       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0228 18:23:51.917235       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:23:51.917307       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0228 18:23:51.917373       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:23:51.917421       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nE0228 18:23:51.917511       1 csrcontroller.go:121] key failed with : configmaps "csr-signer-ca" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager-operator"\nE0228 18:23:51.917568       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:23:51.917607       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:23:51.917662       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: unknown (get secrets)\nI0228 18:45:29.816062       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:45:29.816496       1 csrcontroller.go:83] Shutting down CSR controller\nI0228 18:45:29.816510       1 csrcontroller.go:85] CSR controller shut down\nF0228 18:45:29.816635       1 builder.go:209] server exited\n
Feb 28 18:48:20.077 E ns/openshift-etcd pod/etcd-ip-10-0-140-89.us-west-2.compute.internal node/ip-10-0-140-89.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-28 18:20:19.909006 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-140-89.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-140-89.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:20:19.909647 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-28 18:20:19.910045 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-140-89.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-140-89.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:20:19.911815 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/28 18:20:19 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.140.89:9978: connect: connection refused". Reconnecting...\n
Feb 28 18:48:30.187 E ns/openshift-machine-config-operator pod/machine-config-daemon-sh6ph node/ip-10-0-140-89.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 18:48:46.678 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-137-137.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-28T18:48:31.919Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-28T18:48:31.924Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-28T18:48:31.925Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-28T18:48:31.926Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-28T18:48:31.926Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-28T18:48:31.926Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-28T18:48:31.927Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-28T18:48:31.927Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-28
Feb 28 18:48:58.759 E ns/openshift-console pod/console-7fc78c9686-gfldc node/ip-10-0-141-183.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-02-28T18:45:21Z cmd/main: cookies are secure!\n2020-02-28T18:45:22Z cmd/main: Binding to [::]:8443...\n2020-02-28T18:45:22Z cmd/main: using TLS\n
Feb 28 18:48:59.140 E ns/openshift-service-ca pod/service-ca-64f58c6f95-2l2m7 node/ip-10-0-141-183.us-west-2.compute.internal container=service-ca-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:48:59.571 E ns/openshift-authentication pod/oauth-openshift-5c4f4b87f9-995ts node/ip-10-0-141-183.us-west-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:49:00.005 E ns/openshift-cluster-machine-approver pod/machine-approver-55c65fdb89-4m2d7 node/ip-10-0-141-183.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 8:45:19.967663       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0228 18:45:19.967703       1 main.go:236] Starting Machine Approver\nI0228 18:45:20.067999       1 main.go:146] CSR csr-sscf8 added\nI0228 18:45:20.068029       1 main.go:149] CSR csr-sscf8 is already approved\nI0228 18:45:20.068054       1 main.go:146] CSR csr-5t4sm added\nI0228 18:45:20.068060       1 main.go:149] CSR csr-5t4sm is already approved\nI0228 18:45:20.068077       1 main.go:146] CSR csr-fc9h9 added\nI0228 18:45:20.068084       1 main.go:149] CSR csr-fc9h9 is already approved\nI0228 18:45:20.068097       1 main.go:146] CSR csr-hpsxf added\nI0228 18:45:20.068103       1 main.go:149] CSR csr-hpsxf is already approved\nI0228 18:45:20.068111       1 main.go:146] CSR csr-jhlqp added\nI0228 18:45:20.068117       1 main.go:149] CSR csr-jhlqp is already approved\nI0228 18:45:20.068125       1 main.go:146] CSR csr-m9gx5 added\nI0228 18:45:20.068131       1 main.go:149] CSR csr-m9gx5 is already approved\nI0228 18:45:20.068140       1 main.go:146] CSR csr-wpkmg added\nI0228 18:45:20.068145       1 main.go:149] CSR csr-wpkmg is already approved\nI0228 18:45:20.068152       1 main.go:146] CSR csr-2lcl2 added\nI0228 18:45:20.068157       1 main.go:149] CSR csr-2lcl2 is already approved\nI0228 18:45:20.068163       1 main.go:146] CSR csr-4wgw6 added\nI0228 18:45:20.068169       1 main.go:149] CSR csr-4wgw6 is already approved\nI0228 18:45:20.068176       1 main.go:146] CSR csr-8nwch added\nI0228 18:45:20.068181       1 main.go:149] CSR csr-8nwch is already approved\nI0228 18:45:20.068190       1 main.go:146] CSR csr-k8994 added\nI0228 18:45:20.068196       1 main.go:149] CSR csr-k8994 is already approved\nI0228 18:45:20.068203       1 main.go:146] CSR csr-pgtjz added\nI0228 18:45:20.068207       1 main.go:149] CSR csr-pgtjz is already approved\nW0228 18:45:31.340909       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 22932 (37412)\n
Feb 28 18:49:01.193 E ns/openshift-machine-config-operator pod/machine-config-operator-5897b9f86f-jbhgx node/ip-10-0-141-183.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0228 18:41:47.558722       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"8bd08afa-0423-4267-9983-aef1c91471de", ResourceVersion:"35287", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718509664, loc:(*time.Location)(0x27f7fa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-5897b9f86f-jbhgx_b899d7cf-d3fd-4906-92b2-7caedb1f8ab8\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-02-28T18:41:47Z\",\"renewTime\":\"2020-02-28T18:41:47Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-5897b9f86f-jbhgx_b899d7cf-d3fd-4906-92b2-7caedb1f8ab8 became leader'\nI0228 18:41:47.558820       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0228 18:41:48.084611       1 operator.go:264] Starting MachineConfigOperator\nI0228 18:41:48.088455       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"1abc98cb-1ee7-4170-86e6-c0dbeb5fb3a4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-02-28-173655}] to [{operator 0.0.1-2020-02-28-174122}]\n
Feb 28 18:49:01.498 E ns/openshift-console pod/downloads-654777d558-m64rs node/ip-10-0-141-183.us-west-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:49:03.526 E ns/openshift-operator-lifecycle-manager pod/olm-operator-7bd9fbcd8d-ck6g4 node/ip-10-0-141-183.us-west-2.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:49:04.411 E ns/openshift-console-operator pod/console-operator-78c9f685c6-n9472 node/ip-10-0-141-183.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): se ("DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-02-28-174122")\nE0228 18:48:59.755020       1 status.go:73] DeploymentAvailable FailedUpdate 1 replicas ready at version 0.0.1-2020-02-28-174122\nI0228 18:49:01.946584       1 status_controller.go:176] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-28T18:05:45Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-28T18:27:49Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-28T18:49:01Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-28T18:05:45Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0228 18:49:01.965845       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"6e6a9712-84c7-4638-beff-2dbeca4b7c1f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("")\nI0228 18:49:03.548328       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:49:03.549435       1 controller.go:70] Shutting down Console\nI0228 18:49:03.549644       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0228 18:49:03.549658       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0228 18:49:03.549665       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0228 18:49:03.549670       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0228 18:49:03.549675       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0228 18:49:03.549693       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0228 18:49:03.549698       1 status_controller.go:212] Shutting down StatusSyncer-console\nF0228 18:49:03.549723       1 builder.go:243] stopped\n
Feb 28 18:49:04.758 E ns/openshift-service-ca-operator pod/service-ca-operator-54675574cc-gvvg4 node/ip-10-0-141-183.us-west-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:49:17.040 E ns/openshift-operator-lifecycle-manager pod/packageserver-64c6c6d778-64h2k node/ip-10-0-140-89.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:49:18.418 E kube-apiserver Kube API started failing: Get https://api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
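Events like the one above are produced by the upgrade monitor's availability probe: it repeatedly issues short-timeout GETs against the external API endpoint and records edge transitions (started failing / started responding) as disruption windows. A rough sketch of that style of probe, not the openshift-tests implementation; the URL and interval are illustrative, and a real probe would also configure the cluster CA and bearer-token auth:

// Sketch only: poll a cheap read endpoint with a short timeout and log
// edge transitions, similar in spirit to the disruption events above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const target = "https://api.example-cluster.invalid:6443/api/v1/namespaces/kube-system?timeout=5s"
	client := &http.Client{Timeout: 5 * time.Second}

	available := true
	for range time.Tick(time.Second) {
		resp, err := client.Get(target)
		// Any completed HTTP response (even 401/403) means the endpoint was
		// reachable; errors such as "unexpected EOF" count as disruption.
		ok := err == nil && resp.StatusCode < 500
		if resp != nil {
			resp.Body.Close()
		}
		switch {
		case available && !ok:
			fmt.Printf("%s E Kube API started failing: %v\n", time.Now().Format(time.StampMilli), err)
			available = false
		case !available && ok:
			fmt.Printf("%s I Kube API started responding to GET requests\n", time.Now().Format(time.StampMilli))
			available = true
		}
	}
}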
Feb 28 18:49:24.005 E ns/openshift-machine-api pod/cluster-autoscaler-operator-6895d5cd65-w5wk7 node/ip-10-0-140-89.us-west-2.compute.internal container=cluster-autoscaler-operator container exited with code 255 (Error): I0228 18:49:22.932343       1 main.go:13] Go Version: go1.12.16\nI0228 18:49:22.932581       1 main.go:14] Go OS/Arch: linux/amd64\nI0228 18:49:22.932610       1 main.go:15] Version: cluster-autoscaler-operator v0.0.0-240-g923fdba-dirty\nF0228 18:49:22.934748       1 main.go:33] Failed to create operator: failed to create manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 28 18:49:31.422 E ns/openshift-marketplace pod/marketplace-operator-758968db87-rb8m4 node/ip-10-0-140-89.us-west-2.compute.internal container=marketplace-operator container exited with code 1 (Error): 
Feb 28 18:49:31.600 E ns/openshift-ingress-operator pod/ingress-operator-55df95b99c-4zmzk node/ip-10-0-140-89.us-west-2.compute.internal container=ingress-operator container exited with code 1 (Error): 2020-02-28T18:49:28.138Z	ERROR	operator.main	ingress-operator/start.go:84	failed to create kube client	{"error": "failed to discover api rest mapper: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"}\n
Feb 28 18:49:32.588 E ns/openshift-operator-lifecycle-manager pod/packageserver-64d69d6b9c-j6pcp node/ip-10-0-140-89.us-west-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-28T18:49:31Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 28 18:49:34.626 E ns/openshift-operator-lifecycle-manager pod/packageserver-64d69d6b9c-j6pcp node/ip-10-0-140-89.us-west-2.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-28T18:49:33Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
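The repeated "dial tcp 172.30.0.1:443: connect: connection refused" failures from cluster-autoscaler-operator, ingress-operator, and packageserver are in-cluster clients hitting the kubernetes.default service VIP while the apiservers restart; client-go derives that address from the KUBERNETES_SERVICE_HOST/PORT variables injected into every pod. A hedged sketch of where the address comes from and a bounded startup retry instead of an immediate fatal exit; the backoff values are made up for illustration:

// Sketch only: show the in-cluster apiserver address and retry discovery
// briefly to ride out apiserver rollouts like the one in this log.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig() builds its host from these two variables, which
	// resolve to the service-network VIP (172.30.0.1:443 in the errors above).
	fmt.Printf("in-cluster apiserver: https://%s:%s\n",
		os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT"))

	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Retry for a bounded period rather than exiting on the first
	// "connection refused" while kube-apiserver pods roll.
	var lastErr error
	for i := 0; i < 10; i++ {
		if _, lastErr = client.Discovery().ServerVersion(); lastErr == nil {
			fmt.Println("apiserver reachable")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "apiserver unreachable:", lastErr)
	os.Exit(1)
}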
Feb 28 18:50:47.194 E ns/openshift-marketplace pod/community-operators-5f7d794877-sts4m node/ip-10-0-150-39.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 28 18:50:59.938 E ns/openshift-cluster-node-tuning-operator pod/tuned-m8dwk node/ip-10-0-129-69.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 2.compute.internal" added, tuned profile requested: openshift-node\nI0228 18:26:27.755671    2101 tuned.go:170] disabling system tuned...\nI0228 18:26:27.755536    2101 tuned.go:521] tuned "rendered" added\nI0228 18:26:27.755716    2101 tuned.go:219] extracting tuned profiles\nI0228 18:26:27.760084    2101 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0228 18:26:28.742816    2101 tuned.go:393] getting recommended profile...\nI0228 18:26:28.888583    2101 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0228 18:26:28.888662    2101 tuned.go:286] starting tuned...\n2020-02-28 18:26:29,002 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-28 18:26:29,011 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-28 18:26:29,012 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-28 18:26:29,012 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-28 18:26:29,013 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-28 18:26:29,054 INFO     tuned.daemon.controller: starting controller\n2020-02-28 18:26:29,054 INFO     tuned.daemon.daemon: starting tuning\n2020-02-28 18:26:29,065 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-28 18:26:29,066 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-28 18:26:29,069 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-28 18:26:29,073 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-28 18:26:29,074 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-28 18:26:29,188 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:26:29,200 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Feb 28 18:50:59.984 E ns/openshift-monitoring pod/node-exporter-gxvmm node/ip-10-0-129-69.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:48:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:48:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:48:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:48:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:48:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:49:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:49:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 28 18:51:00.022 E ns/openshift-sdn pod/ovs-7lmrz node/ip-10-0-129-69.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): w_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:07.454Z|00124|connmgr|INFO|br0<->unix#702: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:07.483Z|00125|bridge|INFO|bridge br0: deleted interface vethc6116202 on port 33\n2020-02-28T18:48:07.528Z|00126|connmgr|INFO|br0<->unix#705: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:07.567Z|00127|connmgr|INFO|br0<->unix#708: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:07.596Z|00128|bridge|INFO|bridge br0: deleted interface veth0b7aa3cd on port 34\n2020-02-28T18:48:07.690Z|00129|connmgr|INFO|br0<->unix#711: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:07.734Z|00130|connmgr|INFO|br0<->unix#714: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:07.775Z|00131|bridge|INFO|bridge br0: deleted interface vethc90da7ad on port 20\n2020-02-28T18:48:07.826Z|00132|connmgr|INFO|br0<->unix#717: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:07.877Z|00133|connmgr|INFO|br0<->unix#720: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:07.904Z|00134|bridge|INFO|bridge br0: deleted interface veth9e0df13d on port 25\n2020-02-28T18:48:07.948Z|00135|connmgr|INFO|br0<->unix#723: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:07.997Z|00136|connmgr|INFO|br0<->unix#726: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:08.021Z|00137|bridge|INFO|bridge br0: deleted interface vethed78fde6 on port 24\n2020-02-28T18:48:35.744Z|00138|connmgr|INFO|br0<->unix#750: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:35.771Z|00139|connmgr|INFO|br0<->unix#753: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:35.794Z|00140|bridge|INFO|bridge br0: deleted interface veth9957a718 on port 35\n2020-02-28T18:48:51.070Z|00141|connmgr|INFO|br0<->unix#768: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:48:51.098Z|00142|connmgr|INFO|br0<->unix#771: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:48:51.120Z|00143|bridge|INFO|bridge br0: deleted interface veth7e07630f on port 17\ninfo: Saving flows ...\nTerminated\n
Feb 28 18:51:00.024 E ns/openshift-multus pod/multus-zj878 node/ip-10-0-129-69.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:51:00.067 E ns/openshift-machine-config-operator pod/machine-config-daemon-gkqds node/ip-10-0-129-69.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:51:04.768 E ns/openshift-multus pod/multus-zj878 node/ip-10-0-129-69.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 28 18:51:08.670 E ns/openshift-machine-config-operator pod/machine-config-daemon-gkqds node/ip-10-0-129-69.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 18:51:17.645 E ns/openshift-ingress pod/router-default-7769c9f68d-jrf6p node/ip-10-0-150-39.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:07.676076       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:19.541420       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:24.540587       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:33.817253       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:38.869009       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:46.033011       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:50:51.020213       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:51:03.796335       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:51:08.789417       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0228 18:51:15.625653       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 28 18:51:17.821 E ns/openshift-monitoring pod/prometheus-adapter-84479fcfb8-sldfc node/ip-10-0-150-39.us-west-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:17.881 E ns/openshift-monitoring pod/telemeter-client-5df5b9864f-587sl node/ip-10-0-150-39.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Feb 28 18:51:17.881 E ns/openshift-monitoring pod/telemeter-client-5df5b9864f-587sl node/ip-10-0-150-39.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 28 18:51:18.879 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-150-39.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.879 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.879 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.918 E ns/openshift-console pod/downloads-654777d558-q8d6z node/ip-10-0-150-39.us-west-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.934 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.934 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:18.934 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-150-39.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:29.445 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-66b464f5fd-kgntn node/ip-10-0-129-69.us-west-2.compute.internal container=snapshot-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:51:35.461 E ns/openshift-cluster-node-tuning-operator pod/tuned-6fbhl node/ip-10-0-141-183.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ed.go:521] tuned "rendered" added\nI0228 18:26:40.456906     586 tuned.go:219] extracting tuned profiles\nI0228 18:26:40.460823     586 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0228 18:26:41.445921     586 tuned.go:393] getting recommended profile...\nI0228 18:26:41.620383     586 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0228 18:26:41.620449     586 tuned.go:286] starting tuned...\n2020-02-28 18:26:41,765 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-28 18:26:41,778 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-28 18:26:41,778 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-28 18:26:41,779 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-02-28 18:26:41,779 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-02-28 18:26:41,818 INFO     tuned.daemon.controller: starting controller\n2020-02-28 18:26:41,818 INFO     tuned.daemon.daemon: starting tuning\n2020-02-28 18:26:41,827 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-28 18:26:41,828 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-28 18:26:41,832 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-28 18:26:41,834 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-02-28 18:26:41,835 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-02-28 18:26:41,913 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:26:41,920 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0228 18:49:18.041209     586 tuned.go:115] received signal: terminated\nI0228 18:49:18.041342     586 tuned.go:327] sending TERM to PID 698\n
Feb 28 18:51:35.506 E ns/openshift-sdn pod/ovs-l9hd8 node/ip-10-0-141-183.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): br0<->unix#925: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:03.887Z|00197|connmgr|INFO|br0<->unix#928: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:49:03.908Z|00198|bridge|INFO|bridge br0: deleted interface veth48159263 on port 85\n2020-02-28T18:49:04.408Z|00199|connmgr|INFO|br0<->unix#931: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:04.459Z|00200|connmgr|INFO|br0<->unix#934: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:49:04.502Z|00201|bridge|INFO|bridge br0: deleted interface vethb2e85a4b on port 65\n2020-02-28T18:49:04.677Z|00202|connmgr|INFO|br0<->unix#937: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:04.723Z|00203|connmgr|INFO|br0<->unix#940: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:49:04.757Z|00204|bridge|INFO|bridge br0: deleted interface vethf40e31c5 on port 69\n2020-02-28T18:49:04.943Z|00205|connmgr|INFO|br0<->unix#943: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:04.977Z|00206|connmgr|INFO|br0<->unix#946: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:49:05.000Z|00207|bridge|INFO|bridge br0: deleted interface veth99a0d9cf on port 66\n2020-02-28T18:49:11.239Z|00208|bridge|INFO|bridge br0: added interface veth6e18dcf6 on port 96\n2020-02-28T18:49:11.274Z|00209|connmgr|INFO|br0<->unix#955: 5 flow_mods in the last 0 s (5 adds)\n2020-02-28T18:49:11.328Z|00210|connmgr|INFO|br0<->unix#960: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-02-28T18:49:11.328Z|00211|connmgr|INFO|br0<->unix#961: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:14.400Z|00212|connmgr|INFO|br0<->unix#964: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:49:14.428Z|00213|connmgr|INFO|br0<->unix#967: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:49:14.451Z|00214|bridge|INFO|bridge br0: deleted interface veth6e18dcf6 on port 96\ninfo: Saving flows ...\n2020-02-28T18:49:18Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: send error: Broken pipe\n2020-02-28T18:49:18Z|00002|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Feb 28 18:51:35.523 E ns/openshift-monitoring pod/node-exporter-f7vlh node/ip-10-0-141-183.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -28T18:26:21Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-28T18:26:21Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 28 18:51:35.552 E ns/openshift-multus pod/multus-mqfr7 node/ip-10-0-141-183.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:51:35.575 E ns/openshift-machine-config-operator pod/machine-config-daemon-wl48v node/ip-10-0-141-183.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:51:35.586 E ns/openshift-sdn pod/sdn-controller-zvx2s node/ip-10-0-141-183.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0228 18:33:33.840276       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0228 18:46:33.924526       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"b5621f9f-7bdc-40cf-97e4-233ab39903f1", ResourceVersion:"37919", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718509447, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-141-183\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-28T18:46:33Z\",\"renewTime\":\"2020-02-28T18:46:33Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-141-183 became leader'\nI0228 18:46:33.924720       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0228 18:46:33.932249       1 master.go:51] Initializing SDN master\nI0228 18:46:34.635001       1 network_controller.go:61] Started OpenShift Network Controller\n
Feb 28 18:51:35.598 E ns/openshift-machine-config-operator pod/machine-config-server-dj7h9 node/ip-10-0-141-183.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0228 18:45:29.511734       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-310-g1f107922-dirty (1f1079225b3a5464455047cdfa2af4d471da3597)\nI0228 18:45:29.513029       1 api.go:51] Launching server on :22624\nI0228 18:45:29.514259       1 api.go:51] Launching server on :22623\n
Feb 28 18:51:35.620 E ns/openshift-controller-manager pod/controller-manager-wg7gb node/ip-10-0-141-183.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): I0228 18:27:35.793484       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0228 18:27:35.795704       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0228 18:27:35.795874       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-v86zwsvl/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0228 18:27:35.795840       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0228 18:27:35.808437       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Feb 28 18:51:35.634 E ns/openshift-multus pod/multus-admission-controller-thjqh node/ip-10-0-141-183.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 28 18:51:35.648 E ns/openshift-etcd pod/etcd-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-28 18:22:18.343330 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-141-183.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-141-183.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:22:18.343976 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-28 18:22:18.344301 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-183.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-141-183.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:22:18.346574 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/28 18:22:18 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:9978: connect: connection refused". Reconnecting...\n
Feb 28 18:51:40.885 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): onn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\nW0228 18:49:18.125584       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\nI0228 18:49:18.125656       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0228 18:49:18.125699       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\nW0228 18:49:18.144511       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\nW0228 18:49:18.144912       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\nW0228 18:49:18.202963       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.141.183:2379: connect: connection refused". Reconnecting...\n
Feb 28 18:51:40.885 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0228 18:21:43.665837       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 28 18:51:40.885 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:49:04.783425       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:04.783810       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:49:14.793346       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:14.794723       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 28 18:51:40.885 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): go:26] syncing external loadbalancer hostnames: api.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0228 18:45:34.205612       1 client_cert_rotation_controller.go:121] Waiting for CertRotationController - "ExternalLoadBalancerServing"\nI0228 18:45:34.206320       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "ExternalLoadBalancerServing"\nI0228 18:49:18.094662       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:49:18.095057       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0228 18:49:18.095076       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0228 18:49:18.095086       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0228 18:49:18.095099       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0228 18:49:18.095109       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0228 18:49:18.095121       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0228 18:49:18.095134       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0228 18:49:18.095146       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0228 18:49:18.095156       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0228 18:49:18.095168       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0228 18:49:18.095175       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0228 18:49:18.095182       1 cabundlesyncer.go:86] CA bundle controller shut down\nF0228 18:49:18.189973       1 leaderelection.go:67] leaderelection lost\n
Feb 28 18:51:40.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error): 157] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-28 17:42:34 +0000 UTC to 2030-02-25 17:42:34 +0000 UTC (now=2020-02-28 18:23:40.688415951 +0000 UTC))\nI0228 18:23:40.688458       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1582912868" [] issuer="kubelet-signer" (2020-02-28 18:01:07 +0000 UTC to 2020-02-29 17:42:39 +0000 UTC (now=2020-02-28 18:23:40.68844601 +0000 UTC))\nI0228 18:23:40.688486       1 tlsconfig.go:157] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-28 17:42:37 +0000 UTC to 2020-02-29 17:42:37 +0000 UTC (now=2020-02-28 18:23:40.688474795 +0000 UTC))\nI0228 18:23:40.688830       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-218813323/tls.crt::/tmp/serving-cert-218813323/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1582914219" (2020-02-28 18:23:39 +0000 UTC to 2020-03-29 18:23:40 +0000 UTC (now=2020-02-28 18:23:40.68881388 +0000 UTC))\nI0228 18:23:40.689153       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582914220" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582914220" (2020-02-28 17:23:40 +0000 UTC to 2021-02-27 17:23:40 +0000 UTC (now=2020-02-28 18:23:40.68913988 +0000 UTC))\nI0228 18:49:18.069780       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0228 18:49:18.069812       1 leaderelection.go:67] leaderelection lost\n
Feb 28 18:51:40.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:48:40.558518       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:48:40.559114       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:48:45.141442       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:48:45.141953       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:48:50.572694       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:48:50.573426       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:48:55.161204       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:48:55.161622       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:49:00.586822       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:00.587231       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:49:05.175360       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:05.175706       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:49:10.602460       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:10.602951       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:49:15.196486       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:49:15.196827       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 28 18:51:40.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error): loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-28 17:42:34 +0000 UTC to 2030-02-25 17:42:34 +0000 UTC (now=2020-02-28 18:23:36.713979011 +0000 UTC))\nI0228 18:23:36.713994       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-28 17:42:37 +0000 UTC to 2020-02-29 17:42:37 +0000 UTC (now=2020-02-28 18:23:36.713989454 +0000 UTC))\nI0228 18:23:36.714167       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582912872" (2020-02-28 18:01:17 +0000 UTC to 2022-02-27 18:01:18 +0000 UTC (now=2020-02-28 18:23:36.714159127 +0000 UTC))\nI0228 18:23:36.714335       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582914216" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582914216" (2020-02-28 17:23:35 +0000 UTC to 2021-02-27 17:23:35 +0000 UTC (now=2020-02-28 18:23:36.714327418 +0000 UTC))\nI0228 18:23:36.714364       1 secure_serving.go:178] Serving securely on [::]:10257\nI0228 18:23:36.714400       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0228 18:23:36.714451       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 28 18:51:40.922 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): e&resourceVersion=37412&timeout=5m0s&timeoutSeconds=300&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0228 18:49:55.603014       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nI0228 18:49:55.603090       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\nI0228 18:49:55.603091       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-141-183 stopped leading\nI0228 18:49:55.603165       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0228 18:49:55.603174       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0228 18:49:55.603179       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0228 18:49:55.603186       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0228 18:49:55.603185       1 reconciliation_controller.go:307] resource quota controller worker shutting down\nI0228 18:49:55.603206       1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller\nI0228 18:49:55.603226       1 resource_quota_controller.go:290] Shutting down resource quota controller\nI0228 18:49:55.603338       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0228 18:49:55.603357       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0228 18:49:55.603364       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0228 18:49:55.603371       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nI0228 18:49:55.603370       1 resource_quota_controller.go:259] resource quota controller worker shutting down\nF0228 18:49:55.603384       1 policy_controller.go:94] leaderelection lost\n
Feb 28 18:51:40.998 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-129-69.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-02-28T18:51:39.107Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-28T18:51:39.107Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-28T18:51:39.107Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-28T18:51:39.108Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-28T18:51:39.108Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-02-28T18:51:39.108Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-28T18:51:39.109Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-28T18:51:39.109Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-28T18:51:39.109Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-02-28T18:51:39.111Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-28T18:51:39.111Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-02-28
Feb 28 18:51:42.025 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-183.us-west-2.compute.internal node/ip-10-0-141-183.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): -affinity, 2 node(s) were unschedulable.; waiting\nE0228 18:49:09.778633       1 factory.go:494] pod is already present in the activeQ\nI0228 18:49:09.781348       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-fd49c7ddb-vs45d: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:49:11.357371       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-fd49c7ddb-vs45d: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:49:13.182916       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-765c447484-4x4dl: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:49:13.603013       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-fd49c7ddb-vs45d: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0228 18:49:13.681576       1 scheduler.go:751] pod openshift-monitoring/alertmanager-main-0 is bound successfully on node "ip-10-0-137-137.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0228 18:49:17.043653       1 scheduler.go:751] pod openshift-operator-lifecycle-manager/packageserver-64d69d6b9c-j6pcp is bound successfully on node "ip-10-0-140-89.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0228 18:49:18.368098       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-fd49c7ddb-vs45d: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\n
Feb 28 18:51:47.170 E ns/openshift-machine-config-operator pod/machine-config-daemon-wl48v node/ip-10-0-141-183.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 18:54:03.447 E ns/openshift-cluster-node-tuning-operator pod/tuned-zvc7r node/ip-10-0-150-39.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ecommended profile (openshift-node)\nI0228 18:26:16.951514    1117 tuned.go:286] starting tuned...\n2020-02-28 18:26:17,068 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-02-28 18:26:17,075 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-02-28 18:26:17,075 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-02-28 18:26:17,076 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-02-28 18:26:17,077 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-02-28 18:26:17,138 INFO     tuned.daemon.controller: starting controller\n2020-02-28 18:26:17,138 INFO     tuned.daemon.daemon: starting tuning\n2020-02-28 18:26:17,149 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-28 18:26:17,150 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-28 18:26:17,153 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-28 18:26:17,156 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-28 18:26:17,157 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-28 18:26:17,303 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:26:17,310 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0228 18:45:30.026825    1117 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.026818    1117 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0228 18:45:30.034894    1117 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Unexpected watch close - watch lasted less than a second and no items received\n
Feb 28 18:54:03.470 E ns/openshift-monitoring pod/node-exporter-4mzfr node/ip-10-0-150-39.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:51:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-02-28T18:52:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Feb 28 18:54:03.487 E ns/openshift-multus pod/multus-7lqt9 node/ip-10-0-150-39.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:54:03.532 E ns/openshift-sdn pod/ovs-l7x6b node/ip-10-0-150-39.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 743: receive error: Connection reset by peer\n2020-02-28T18:51:17.668Z|00012|reconnect|WARN|unix#743: connection dropped (Connection reset by peer)\n2020-02-28T18:51:17.803Z|00013|jsonrpc|WARN|unix#748: receive error: Connection reset by peer\n2020-02-28T18:51:17.803Z|00014|reconnect|WARN|unix#748: connection dropped (Connection reset by peer)\n2020-02-28T18:51:17.366Z|00116|connmgr|INFO|br0<->unix#838: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:51:17.391Z|00117|bridge|INFO|bridge br0: deleted interface veth82d60b80 on port 33\n2020-02-28T18:51:17.438Z|00118|connmgr|INFO|br0<->unix#841: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:51:17.472Z|00119|connmgr|INFO|br0<->unix#844: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:51:17.510Z|00120|bridge|INFO|bridge br0: deleted interface vetha5e3e898 on port 42\n2020-02-28T18:51:17.572Z|00121|connmgr|INFO|br0<->unix#847: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:51:17.645Z|00122|connmgr|INFO|br0<->unix#850: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:51:17.676Z|00123|bridge|INFO|bridge br0: deleted interface veth12027cda on port 31\n2020-02-28T18:51:17.724Z|00124|connmgr|INFO|br0<->unix#853: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:51:17.779Z|00125|connmgr|INFO|br0<->unix#856: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:51:17.811Z|00126|bridge|INFO|bridge br0: deleted interface veth8629f863 on port 29\n2020-02-28T18:51:45.916Z|00127|connmgr|INFO|br0<->unix#878: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:51:45.945Z|00128|connmgr|INFO|br0<->unix#881: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:51:45.968Z|00129|bridge|INFO|bridge br0: deleted interface vethb40f49ab on port 40\n2020-02-28T18:52:01.296Z|00130|connmgr|INFO|br0<->unix#897: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:52:01.325Z|00131|connmgr|INFO|br0<->unix#900: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:52:01.348Z|00132|bridge|INFO|bridge br0: deleted interface veth2bc50ddb on port 38\ninfo: Saving flows ...\n
Feb 28 18:54:03.550 E ns/openshift-machine-config-operator pod/machine-config-daemon-8zs28 node/ip-10-0-150-39.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:54:07.416 E ns/openshift-multus pod/multus-7lqt9 node/ip-10-0-150-39.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 28 18:54:14.002 E ns/openshift-machine-config-operator pod/machine-config-daemon-8zs28 node/ip-10-0-150-39.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 18:54:47.352 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Feb 28 18:56:55.668 E ns/openshift-monitoring pod/thanos-querier-5845666899-s57xp node/ip-10-0-153-236.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/28 18:51:25 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:51:25 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/28 18:51:25 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/28 18:51:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/28 18:51:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/28 18:51:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/28 18:51:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/28 18:51:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0228 18:51:25.942037       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/02/28 18:51:25 http.go:107: HTTPS: listening on [::]:9091\n
Feb 28 18:56:55.717 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-7bd764bccf-zp5jh node/ip-10-0-153-236.us-west-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): 6-879e-22395b08646e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nE0228 18:45:30.068446       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments?allowWatchBookmarks=true&resourceVersion=37080&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0228 18:45:30.072023       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nI0228 18:51:15.815598       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"d6f6da65-2843-42c6-879e-22395b08646e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0228 18:51:27.886706       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"d6f6da65-2843-42c6-879e-22395b08646e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0228 18:56:53.221386       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0228 18:56:53.221427       1 leaderelection.go:66] leaderelection lost\n
Feb 28 18:56:56.699 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7c8f4tncm node/ip-10-0-153-236.us-west-2.compute.internal container=operator container exited with code 255 (Error): 383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received\nI0228 18:54:35.449324       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 0 items received\nI0228 18:54:37.801183       1 httplog.go:90] GET /metrics: (5.742844ms) 200 [Prometheus/2.15.2 10.129.2.14:42512]\nI0228 18:54:45.099513       1 httplog.go:90] GET /metrics: (1.317214ms) 200 [Prometheus/2.15.2 10.128.2.19:48058]\nI0228 18:54:52.756179       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 1 items received\nI0228 18:54:57.839835       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 1 items received\nI0228 18:55:07.802069       1 httplog.go:90] GET /metrics: (6.581932ms) 200 [Prometheus/2.15.2 10.129.2.14:42512]\nI0228 18:55:15.099430       1 httplog.go:90] GET /metrics: (1.249524ms) 200 [Prometheus/2.15.2 10.128.2.19:48058]\nI0228 18:55:37.801045       1 httplog.go:90] GET /metrics: (5.626277ms) 200 [Prometheus/2.15.2 10.129.2.14:42512]\nI0228 18:55:45.099380       1 httplog.go:90] GET /metrics: (1.280086ms) 200 [Prometheus/2.15.2 10.128.2.19:48058]\nI0228 18:56:07.801858       1 httplog.go:90] GET /metrics: (6.325002ms) 200 [Prometheus/2.15.2 10.129.2.14:42512]\nI0228 18:56:15.099394       1 httplog.go:90] GET /metrics: (1.29719ms) 200 [Prometheus/2.15.2 10.128.2.19:48058]\nI0228 18:56:37.801132       1 httplog.go:90] GET /metrics: (5.615802ms) 200 [Prometheus/2.15.2 10.129.2.14:42512]\nI0228 18:56:45.099185       1 httplog.go:90] GET /metrics: (1.103772ms) 200 [Prometheus/2.15.2 10.128.2.19:48058]\nI0228 18:56:52.130856       1 reflector.go:383] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Watch close - *v1.ServiceCatalogControllerManager total 0 items received\nI0228 18:56:54.302101       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0228 18:56:54.302205       1 leaderelection.go:66] leaderelection lost\n
Feb 28 18:56:56.787 E ns/openshift-machine-config-operator pod/machine-config-controller-698956b4fc-x7pd8 node/ip-10-0-153-236.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): compute.internal is reporting OutOfDisk=Unknown\nI0228 18:54:03.281068       1 node_controller.go:433] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal is now reporting unready: node ip-10-0-150-39.us-west-2.compute.internal is reporting NotReady=False\nI0228 18:54:12.837483       1 node_controller.go:433] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal is now reporting unready: node ip-10-0-150-39.us-west-2.compute.internal is reporting Unschedulable\nI0228 18:54:18.570998       1 node_controller.go:442] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal has completed update to rendered-worker-87e6cedecd16da69bcfa334f100d26a6\nI0228 18:54:18.580056       1 node_controller.go:435] Pool worker: node ip-10-0-150-39.us-west-2.compute.internal is now reporting ready\nI0228 18:54:23.571352       1 status.go:82] Pool worker: All nodes are updated with rendered-worker-87e6cedecd16da69bcfa334f100d26a6\nI0228 18:56:46.431598       1 node_controller.go:442] Pool master: node ip-10-0-141-183.us-west-2.compute.internal has completed update to rendered-master-303b4a72528f162863108c879a014271\nI0228 18:56:46.451706       1 node_controller.go:435] Pool master: node ip-10-0-141-183.us-west-2.compute.internal is now reporting ready\nI0228 18:56:51.431913       1 node_controller.go:758] Setting node ip-10-0-153-236.us-west-2.compute.internal to desired config rendered-master-303b4a72528f162863108c879a014271\nI0228 18:56:51.449755       1 node_controller.go:452] Pool master: node ip-10-0-153-236.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-303b4a72528f162863108c879a014271\nI0228 18:56:52.469010       1 node_controller.go:452] Pool master: node ip-10-0-153-236.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0228 18:56:52.488625       1 node_controller.go:433] Pool master: node ip-10-0-153-236.us-west-2.compute.internal is now reporting unready: node ip-10-0-153-236.us-west-2.compute.internal is reporting Unschedulable\n
Feb 28 18:56:57.468 E ns/openshift-machine-api pod/machine-api-operator-5d4c5b7454-qtchb node/ip-10-0-153-236.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 28 18:56:57.925 E ns/openshift-operator-lifecycle-manager pod/olm-operator-7bd9fbcd8d-987jm node/ip-10-0-153-236.us-west-2.compute.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:56:58.142 E ns/openshift-console-operator pod/console-operator-78c9f685c6-88d5b node/ip-10-0-153-236.us-west-2.compute.internal container=console-operator container exited with code 255 (Error):        1 status_controller.go:199] Starting StatusSyncer-console\nI0228 18:49:12.651783       1 shared_informer.go:197] Waiting for caches to sync for LoggingSyncer\nI0228 18:49:12.651795       1 management_state_controller.go:102] Starting management-state-controller-console\nI0228 18:49:12.754479       1 shared_informer.go:204] Caches are synced for LoggingSyncer \nI0228 18:49:12.754502       1 base_controller.go:45] Starting #1 worker of LoggingSyncer controller ...\nI0228 18:49:12.829434       1 shared_informer.go:204] Caches are synced for UnsupportedConfigOverridesController \nI0228 18:49:12.829462       1 base_controller.go:45] Starting #1 worker of UnsupportedConfigOverridesController controller ...\nI0228 18:56:56.302611       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:56:56.303429       1 controller.go:70] Shutting down Console\nI0228 18:56:56.305212       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0228 18:56:56.305233       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0228 18:56:56.305243       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0228 18:56:56.305409       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0228 18:56:56.305444       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0228 18:56:56.305487       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0228 18:56:56.305520       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0228 18:56:56.305542       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0228 18:56:56.305620       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0228 18:56:56.305705       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0228 18:56:56.305415       1 builder.go:243] stopped\n
Feb 28 18:56:58.245 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-db89f7f7d-dznfj node/ip-10-0-153-236.us-west-2.compute.internal container=cluster-storage-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 28 18:56:58.351 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5d5465998d-pcwzx node/ip-10-0-153-236.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): .ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"98d0c983-c390-4d53-86e7-7c7a773db3b7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal container=\"cluster-policy-controller\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0228 18:51:58.625115       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"98d0c983-c390-4d53-86e7-7c7a773db3b7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-controller-manager-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0228 18:56:55.180793       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:56:55.181210       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0228 18:56:55.181665       1 builder.go:209] server exited\n
Feb 28 18:56:58.445 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-5bbc95b4f4-wdpv4 node/ip-10-0-153-236.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\\n\"\nStaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz\" is not ready\nStaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-apiserver-insecure-readyz\" is terminated: \"Error\" - \"I0228 18:21:43.665837       1 readyz.go:103] Listening on 0.0.0.0:6080\\n\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-apiserver\" is not ready"\nI0228 18:52:02.484859       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"a4c22f92-bf8c-4ada-ae26-3e5287a932d3", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-141-183.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-141-183.us-west-2.compute.internal container=\"kube-apiserver\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0228 18:56:55.725532       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:56:55.725715       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0228 18:56:55.725875       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0228 18:56:55.725979       1 builder.go:209] server exited\n
Feb 28 18:56:59.328 E ns/openshift-console pod/console-7fc78c9686-bp8ft node/ip-10-0-153-236.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-02-28T18:27:45Z cmd/main: cookies are secure!\n2020-02-28T18:27:45Z cmd/main: Binding to [::]:8443...\n2020-02-28T18:27:45Z cmd/main: using TLS\n2020-02-28T18:49:24Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-02-28T18:49:54Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Feb 28 18:57:00.449 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7686b999c5-hwkcs node/ip-10-0-153-236.us-west-2.compute.internal container=operator container exited with code 255 (Error): prometheus-k8s\nI0228 18:56:15.837515       1 reflector.go:418] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.Service total 1 items received\nI0228 18:56:17.991263       1 httplog.go:90] GET /metrics: (10.178878ms) 200 [Prometheus/2.15.2 10.129.2.14:49134]\nI0228 18:56:19.089652       1 httplog.go:90] GET /metrics: (1.624064ms) 200 [Prometheus/2.15.2 10.128.2.19:53748]\nI0228 18:56:22.841830       1 reflector.go:418] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Build total 0 items received\nI0228 18:56:25.857650       1 reflector.go:418] k8s.io/client-go/informers/factory.go:135: Watch close - *v1.ConfigMap total 44 items received\nI0228 18:56:35.040816       1 request.go:565] Throttling request took 161.521527ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0228 18:56:35.240842       1 request.go:565] Throttling request took 195.426496ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0228 18:56:47.987119       1 httplog.go:90] GET /metrics: (5.987662ms) 200 [Prometheus/2.15.2 10.129.2.14:49134]\nI0228 18:56:49.089662       1 httplog.go:90] GET /metrics: (1.688498ms) 200 [Prometheus/2.15.2 10.128.2.19:53748]\nI0228 18:56:55.250520       1 request.go:565] Throttling request took 193.380867ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0228 18:56:59.402834       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:56:59.403530       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0228 18:56:59.403616       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0228 18:56:59.403685       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0228 18:56:59.403784       1 builder.go:243] stopped\n
Feb 28 18:57:46.950 E clusteroperator/monitoring changed Degraded to True: UpdatingkubeStateMetricsFailed: Failed to rollout the stack. Error: running task Updating kube-state-metrics failed: reconciling kube-state-metrics ClusterRoleBinding failed: updating ClusterRoleBinding object failed: Put https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kube-state-metrics: read tcp 10.130.0.13:52874->172.30.0.1:443: read: connection reset by peer
Feb 28 18:59:54.554 E ns/openshift-monitoring pod/node-exporter-cqt8w node/ip-10-0-153-236.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -28T18:27:09Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-28T18:27:09Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 28 18:59:54.595 E ns/openshift-controller-manager pod/controller-manager-4k9ng node/ip-10-0-153-236.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): ring watch stream event decoding: unexpected EOF\nI0228 18:49:18.390971       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.390976       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.391023       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.391035       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.391048       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.391101       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0228 18:49:18.391656       1 reflector.go:340] k8s.io/client-go/informers/factory.go:135: watch of *v1.ReplicationController ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nE0228 18:49:18.392958       1 reflector.go:320] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Failed to watch *v1.Proxy: Get https://172.30.0.1:443/apis/config.openshift.io/v1/proxies?allowWatchBookmarks=true&resourceVersion=37419&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nW0228 18:56:57.980116       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 39; INTERNAL_ERROR") has prevented the request from succeeding\nW0228 18:56:57.981287       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 33; INTERNAL_ERROR") has prevented the request from succeeding\n
Feb 28 18:59:54.604 E ns/openshift-sdn pod/sdn-controller-sbxn5 node/ip-10-0-153-236.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0228 18:33:50.705067       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 28 18:59:54.630 E ns/openshift-multus pod/multus-admission-controller-x2dsj node/ip-10-0-153-236.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 28 18:59:54.643 E ns/openshift-multus pod/multus-f578r node/ip-10-0-153-236.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 28 18:59:54.664 E ns/openshift-machine-config-operator pod/machine-config-daemon-468cn node/ip-10-0-153-236.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 28 18:59:54.672 E ns/openshift-machine-config-operator pod/machine-config-server-g84s4 node/ip-10-0-153-236.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0228 18:45:17.649910       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-310-g1f107922-dirty (1f1079225b3a5464455047cdfa2af4d471da3597)\nI0228 18:45:17.650857       1 api.go:51] Launching server on :22624\nI0228 18:45:17.650913       1 api.go:51] Launching server on :22623\n
Feb 28 18:59:54.681 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=scheduler container exited with code 2 (Error):  get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler"\nE0228 18:25:54.065903       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0228 18:25:54.065936       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0228 18:25:54.065965       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0228 18:25:54.066001       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0228 18:25:54.066033       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0228 18:25:54.066082       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0228 18:25:54.070595       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0228 18:25:54.070639       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0228 18:25:54.070674       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0228 18:25:54.070710       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0228 18:25:54.333742       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:25:54.333873       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Feb 28 18:59:54.691 E ns/openshift-sdn pod/ovs-qlrf5 node/ip-10-0-153-236.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): 57.901Z|00175|connmgr|INFO|br0<->unix#1211: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:57.943Z|00176|bridge|INFO|bridge br0: deleted interface vethccd97031 on port 57\n2020-02-28T18:56:57.600Z|00029|jsonrpc|WARN|unix#1049: send error: Broken pipe\n2020-02-28T18:56:57.601Z|00030|reconnect|WARN|unix#1049: connection dropped (Broken pipe)\n2020-02-28T18:56:58.335Z|00177|connmgr|INFO|br0<->unix#1214: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:56:58.378Z|00178|connmgr|INFO|br0<->unix#1217: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:58.417Z|00179|bridge|INFO|bridge br0: deleted interface veth03607582 on port 48\n2020-02-28T18:56:58.828Z|00180|connmgr|INFO|br0<->unix#1221: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:56:58.874Z|00181|connmgr|INFO|br0<->unix#1224: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:58.900Z|00182|bridge|INFO|bridge br0: deleted interface veth01c325b0 on port 64\n2020-02-28T18:56:58.949Z|00183|connmgr|INFO|br0<->unix#1227: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:56:59.094Z|00184|connmgr|INFO|br0<->unix#1230: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:59.138Z|00185|bridge|INFO|bridge br0: deleted interface veth6ac3567a on port 78\n2020-02-28T18:56:59.335Z|00186|connmgr|INFO|br0<->unix#1233: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:56:59.378Z|00187|connmgr|INFO|br0<->unix#1236: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:59.420Z|00188|bridge|INFO|bridge br0: deleted interface veth3aefba89 on port 76\n2020-02-28T18:56:59.771Z|00189|connmgr|INFO|br0<->unix#1242: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-28T18:56:59.801Z|00190|connmgr|INFO|br0<->unix#1245: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-28T18:56:59.822Z|00191|bridge|INFO|bridge br0: deleted interface vethbbcf2fb9 on port 74\n2020-02-28T18:56:59.661Z|00031|jsonrpc|WARN|unix#1081: send error: Broken pipe\n2020-02-28T18:56:59.661Z|00032|reconnect|WARN|unix#1081: connection dropped (Broken pipe)\ninfo: Saving flows ...\nTerminated\n
Feb 28 18:59:54.704 E ns/openshift-etcd pod/etcd-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-02-28 18:21:19.681414 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-236.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-153-236.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:21:19.682166 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-02-28 18:21:19.682519 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-236.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-resources/secrets/etcd-all-peer/etcd-peer-ip-10-0-153-236.us-west-2.compute.internal.key, ca = /etc/kubernetes/static-pod-resources/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-02-28 18:21:19.684501 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/02/28 18:21:19 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-v86zwsvl-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.153.236:9978: connect: connection refused". Reconnecting...\n
Feb 28 18:59:54.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-apiserver container exited with code 1 (Error): led with: OpenAPI spec does not exist\nI0228 18:56:55.284251       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\n2020/02/28 18:56:57 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:56:57 httputil: ReverseProxy read error during body copy: unexpected EOF\nW0228 18:56:57.982020       1 reflector.go:326] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: watch of *v1.Group ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 2841; INTERNAL_ERROR") has prevented the request from succeeding\n2020/02/28 18:56:57 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:56:57 httputil: ReverseProxy read error during body copy: unexpected EOF\n2020/02/28 18:56:57 httputil: ReverseProxy read error during body copy: unexpected EOF\nE0228 18:57:01.696115       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0228 18:57:01.749716       1 available_controller.go:406] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0228 18:57:11.701198       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0228 18:57:11.701579       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0228 18:57:11.701600       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0228 18:57:11.701718       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\n
Feb 28 18:59:54.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0228 18:25:49.809867       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 28 18:59:54.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:57:10.158353       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:57:10.158596       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0228 18:57:10.589944       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:57:10.590183       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 28 18:59:54.713 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0228 18:25:48.882240       1 cmd.go:200] Using insecure, self-signed certificates\nI0228 18:25:48.885786       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1582914348 cert, and key in /tmp/serving-cert-125344685/serving-signer.crt, /tmp/serving-cert-125344685/serving-signer.key\nI0228 18:25:49.458027       1 observer_polling.go:155] Starting file observer\nI0228 18:25:54.612591       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0228 18:57:11.694618       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0228 18:57:11.694726       1 leaderelection.go:67] leaderelection lost\n
Feb 28 18:59:54.729 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=cluster-policy-controller container exited with code 1 (Error): I0228 18:22:21.001165       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0228 18:22:21.003696       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0228 18:22:21.004420       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 28 18:59:54.729 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:35.698394       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:35.698659       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:38.549410       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:38.549684       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:45.707794       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:45.708204       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:48.558037       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:48.558318       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:55.724936       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:55.725224       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:56:58.576993       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:56:58.577293       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:57:05.733485       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:57:05.733849       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0228 18:57:08.583271       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0228 18:57:08.583569       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Feb 28 18:59:54.729 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-controller-manager container exited with code 2 (Error):     1 replica_set.go:561] Too few replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-68787f8874, need 1, creating 1\nI0228 18:57:02.199112       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nI0228 18:57:02.222390       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-68787f8874", UID:"d14a79ce-4606-4e2e-8c5d-f96e6a27ef6d", APIVersion:"apps/v1", ResourceVersion:"45547", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-68787f8874-rd7zj\nI0228 18:57:06.791689       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: b1f40232-86b0-4e7b-af6d-483892fb4906]\nE0228 18:57:06.863717       1 memcache.go:199] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0228 18:57:06.881949       1 garbagecollector.go:404] processing item [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: c28cea12-dcf4-46d0-80e5-6776b21e5d1a]\nE0228 18:57:06.885941       1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request\nI0228 18:57:06.890625       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-cluster-node-tuning-operator, name: node-tuning-operator-lock, uid: b1f40232-86b0-4e7b-af6d-483892fb4906] with propagation policy Background\nI0228 18:57:06.892260       1 garbagecollector.go:517] delete object [v1/ConfigMap, namespace: openshift-cluster-storage-operator, name: cluster-storage-operator-lock, uid: c28cea12-dcf4-46d0-80e5-6776b21e5d1a] with propagation policy Background\n
Feb 28 18:59:54.729 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-236.us-west-2.compute.internal node/ip-10-0-153-236.us-west-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): connect: connection refused\nE0228 18:25:47.939577       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=23358&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0228 18:25:54.069901       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0228 18:25:54.070068       1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cert-recovery-controller-lock: configmaps "cert-recovery-controller-lock" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\nE0228 18:25:54.070096       1 reflector.go:320] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nI0228 18:45:44.945593       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0228 18:45:44.945929       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"bb7ba57c-9ecf-4899-8d8f-500882840069", APIVersion:"v1", ResourceVersion:"37516", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' abbd92c9-69a0-4bec-ad7d-5f0fa646d0ba became leader\nI0228 18:45:44.949030       1 csrcontroller.go:81] Starting CSR controller\nI0228 18:45:44.949048       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0228 18:45:45.049180       1 shared_informer.go:204] Caches are synced for CSRController \nI0228 18:57:11.791081       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0228 18:57:11.791255       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0228 18:57:11.791344       1 builder.go:209] server exited\n
Feb 28 18:59:54.737 E ns/openshift-cluster-node-tuning-operator pod/tuned-7lxgn node/ip-10-0-153-236.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 020-02-28 18:26:36,857 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-02-28 18:26:36,927 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-28 18:26:36,937 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0228 18:45:30.026461    1172 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:45:30.027359    1172 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0228 18:45:30.047652    1172 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Unexpected watch close - watch lasted less than a second and no items received\nI0228 18:49:18.392494    1172 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0228 18:49:18.411854    1172 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0228 18:49:18.417812    1172 reflector.go:320] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Failed to watch *v1.Tuned: Get https://172.30.0.1:443/apis/tuned.openshift.io/v1/namespaces/openshift-cluster-node-tuning-operator/tuneds?allowWatchBookmarks=true&fieldSelector=metadata.name%3Drendered&resourceVersion=37419&timeoutSeconds=364&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0228 18:49:24.974784    1172 tuned.go:494] profile "ip-10-0-153-236.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0228 18:49:25.252832    1172 tuned.go:494] profile "ip-10-0-153-236.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0228 18:49:25.559100    1172 tuned.go:393] getting recommended profile...\nI0228 18:49:25.670454    1172 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Feb 28 19:00:01.126 E ns/openshift-multus pod/multus-f578r node/ip-10-0-153-236.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 28 19:00:04.041 E ns/openshift-multus pod/multus-f578r node/ip-10-0-153-236.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 28 19:00:06.946 E ns/openshift-machine-config-operator pod/machine-config-daemon-468cn node/ip-10-0-153-236.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 28 19:00:09.683 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-236.us-west-2.compute.internal" not ready since 2020-02-28 18:59:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Feb 28 19:00:09.692 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-236.us-west-2.compute.internal" not ready since 2020-02-28 18:59:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Feb 28 19:00:09.695 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-236.us-west-2.compute.internal" not ready since 2020-02-28 18:59:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Feb 28 19:00:09.695 E clusteroperator/etcd changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-236.us-west-2.compute.internal" not ready since 2020-02-28 18:59:53 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)