Result: SUCCESS
Tests: 3 failed / 20 succeeded
Started: 2020-02-14 13:01
Elapsed: 1h24m
Work namespace: ci-op-np7iik50
Refs: release-4.3:b59f3439, 292:3d74c453
Pod: fff8af45-4f29-11ea-b4e5-0a58ac106262
Repo: openshift/cluster-api-provider-aws
Revision: 1

Test Failures


Cluster upgrade control-plane-upgrade (36m28s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\scontrol\-plane\-upgrade$'
API was unreachable during upgrade for at least 38s:

Feb 14 13:46:19.414 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 14 13:46:20.301 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 13:46:20.395 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 13:51:32.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 13:51:33.301 - 21s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 13:51:54.638 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:02:16.275 E kube-apiserver Kube API started failing: etcdserver: request timed out
Feb 14 14:02:16.301 E kube-apiserver Kube API is not responding to GET requests
Feb 14 14:02:16.373 I kube-apiserver Kube API started responding to GET requests
Feb 14 14:02:27.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:02:27.392 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:02:46.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:02:46.393 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:05:48.502 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 14 14:05:49.302 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 14:06:04.394 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:06:21.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:06:21.393 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:06:39.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:06:39.393 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:09:54.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:09:54.390 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:10:12.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:10:12.486 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:10:28.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:10:28.391 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:10:44.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:10:44.390 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:11:00.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:11:00.391 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:11:18.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:11:18.390 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:11:37.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded
Feb 14 14:11:37.389 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:11:53.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:11:53.390 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:12:14.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:12:14.389 I openshift-apiserver OpenShift API started responding to GET requests
Feb 14 14:12:31.301 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:12:31.392 I openshift-apiserver OpenShift API started responding to GET requests
from junit_upgrade_1581689772.xml



Cluster upgrade k8s-service-upgrade (37m59s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sk8s\-service\-upgrade$'
Service was unreachable during upgrade for at least 6s:

Feb 14 13:53:11.443 E ns/e2e-k8s-service-upgrade-869 svc/service-test Service stopped responding to GET requests on reused connections
Feb 14 13:53:12.207 I ns/e2e-k8s-service-upgrade-869 svc/service-test Service started responding to GET requests on reused connections
Feb 14 13:53:37.443 E ns/e2e-k8s-service-upgrade-869 svc/service-test Service stopped responding to GET requests on reused connections
Feb 14 13:53:37.620 I ns/e2e-k8s-service-upgrade-869 svc/service-test Service started responding to GET requests on reused connections
Feb 14 14:02:52.467 E ns/e2e-k8s-service-upgrade-869 svc/service-test Service stopped responding to GET requests over new connections
Feb 14 14:02:52.682 I ns/e2e-k8s-service-upgrade-869 svc/service-test Service started responding to GET requests over new connections
Feb 14 14:08:45.208 E ns/e2e-k8s-service-upgrade-869 svc/service-test Service stopped responding to GET requests over new connections
Feb 14 14:08:45.443 - 1s    E ns/e2e-k8s-service-upgrade-869 svc/service-test Service is not responding to GET requests over new connections
Feb 14 14:08:48.273 I ns/e2e-k8s-service-upgrade-869 svc/service-test Service started responding to GET requests over new connections
Feb 14 14:09:49.443 E ns/e2e-k8s-service-upgrade-869 svc/service-test Service stopped responding to GET requests on reused connections
Feb 14 14:09:49.621 I ns/e2e-k8s-service-upgrade-869 svc/service-test Service started responding to GET requests on reused connections
from junit_upgrade_1581689772.xml



openshift-tests Monitor cluster while tests execute (38m6s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
290 error level events were detected during this test run:

Feb 14 13:41:04.804 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6b7f77477b-fvctt node/ip-10-0-138-143.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:42:50.034 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-857d597886-vxlv2 node/ip-10-0-138-143.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error):    1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: too old resource version: 13374 (15708)\nW0214 13:37:46.104431       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 11925 (15702)\nW0214 13:37:46.104591       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 9082 (15804)\nW0214 13:37:46.107167       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 15049 (15701)\nW0214 13:37:46.110226       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15457 (16695)\nW0214 13:37:46.110332       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 9074 (15778)\nW0214 13:37:46.110395       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 10614 (15812)\nW0214 13:37:46.110672       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15688 (16695)\nW0214 13:37:46.110843       1 reflector.go:299] k8s.io/client-go/dynamic/dynamicinformer/informer.go:90: watch of *unstructured.Unstructured ended with: too old resource version: 15790 (15813)\nW0214 13:41:01.477852       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18429 (18589)\nI0214 13:42:48.977547       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 13:42:48.977686       1 leaderelection.go:66] leaderelection lost\nF0214 13:42:48.978111       1 builder.go:217] server exited\n
Feb 14 13:43:02.065 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-f9db4cd4d-8zvlr node/ip-10-0-138-143.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): *v1.Scheduler ended with: too old resource version: 5936 (15542)\nW0214 13:35:44.104085       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 9631 (14354)\nW0214 13:35:44.104188       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 14046 (15401)\nW0214 13:35:44.274919       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 11925 (14354)\nW0214 13:35:44.275063       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: too old resource version: 13374 (14359)\nW0214 13:35:44.277614       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 10636 (14354)\nW0214 13:35:44.288168       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: too old resource version: 11608 (14359)\nW0214 13:35:44.288278       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 5286 (15491)\nW0214 13:35:44.367177       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14836 (14947)\nW0214 13:35:44.373670       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 5915 (15401)\nW0214 13:41:01.483698       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18429 (18589)\nI0214 13:43:00.908236       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 13:43:00.909229       1 builder.go:217] server exited\nI0214 13:43:00.913130       1 node_controller.go:172] Shutting down NodeController\n
Feb 14 13:44:47.334 E ns/openshift-machine-api pod/machine-api-operator-64cd56cd75-8f2wh node/ip-10-0-138-143.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 14 13:46:26.124 E kube-apiserver Kube API started failing: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Feb 14 13:47:32.169 E ns/openshift-authentication pod/oauth-openshift-6c7ff5dbdc-wd94w node/ip-10-0-138-143.us-west-2.compute.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:42.001 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-6h6dl node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:42.211 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-4b87g node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:42.517 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-rgmgp node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:42.903 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-ndght node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:43.436 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-57qzk node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:44.136 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-cqcn8 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:44.826 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-fjscv node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:45.342 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-4j9t4 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:46.330 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-fcdcv node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:46.585 E ns/openshift-insights pod/insights-operator-5459776959-4q997 node/ip-10-0-138-143.us-west-2.compute.internal container=operator container exited with code 2 (Error): rding config/oauth with fingerprint=\nI0214 13:45:24.402144       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0214 13:45:24.404848       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0214 13:45:24.405021       1 diskrecorder.go:170] Writing 37 records to /var/lib/insights-operator/insights-2020-02-14-134524.tar.gz\nI0214 13:45:24.410103       1 diskrecorder.go:134] Wrote 37 records to disk in 5ms\nI0214 13:45:24.410137       1 periodic.go:151] Periodic gather config completed in 68ms\nI0214 13:45:26.433493       1 status.go:298] The operator is healthy\nI0214 13:45:26.433570       1 status.go:373] No status update necessary, objects are identical\nI0214 13:45:26.434182       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0214 13:45:26.437690       1 configobserver.go:90] Found cloud.openshift.com token\nI0214 13:45:26.437708       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0214 13:45:32.144036       1 httplog.go:90] GET /metrics: (4.34985ms) 200 [Prometheus/2.14.0 10.128.2.13:53570]\nI0214 13:45:47.338880       1 httplog.go:90] GET /metrics: (4.8187ms) 200 [Prometheus/2.14.0 10.131.0.14:48226]\nI0214 13:46:02.144069       1 httplog.go:90] GET /metrics: (4.301562ms) 200 [Prometheus/2.14.0 10.128.2.13:53570]\nI0214 13:46:17.339596       1 httplog.go:90] GET /metrics: (5.237432ms) 200 [Prometheus/2.14.0 10.131.0.14:48226]\nI0214 13:46:32.156738       1 httplog.go:90] GET /metrics: (16.932742ms) 200 [Prometheus/2.14.0 10.128.2.13:53570]\nI0214 13:46:47.340663       1 httplog.go:90] GET /metrics: (6.690931ms) 200 [Prometheus/2.14.0 10.131.0.14:48226]\nI0214 13:47:02.145856       1 httplog.go:90] GET /metrics: (6.041297ms) 200 [Prometheus/2.14.0 10.128.2.13:53570]\nI0214 13:47:17.339069       1 httplog.go:90] GET /metrics: (5.163495ms) 200 [Prometheus/2.14.0 10.131.0.14:48226]\nI0214 13:47:26.433762       1 status.go:298] The operator is healthy\nI0214 13:47:26.433816       1 status.go:373] No status update necessary, objects are identical\n
Feb 14 13:47:46.878 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-m2gvc node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:47.083 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-k2vp7 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:55.230 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-xgqrf node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:55.327 E ns/openshift-image-registry pod/node-ca-pth24 node/ip-10-0-138-12.us-west-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:55.418 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-nrd55 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:55.726 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-df85b node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:55.885 E ns/openshift-monitoring pod/node-exporter-qw6tf node/ip-10-0-140-160.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:30:18Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:18Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 13:47:56.104 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-7w5qg node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:56.379 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-trtgw node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:56.712 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-gdmhx node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:57.434 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-lcxxp node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:58.016 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-8g8mm node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:58.602 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-sdwtf node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:47:59.438 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-gdkwk node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:00.670 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-hxngq node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:01.000 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-72g56 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:01.417 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-5h4g7 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:02.036 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-sfpgj node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:02.700 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-l29v6 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:03.293 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-84lxd node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:04.063 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-7lg7w node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:04.790 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-c2mch node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:05.828 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-fbghr node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:06.599 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-7pv79 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:07.549 E ns/openshift-image-registry pod/node-ca-5mmxf node/ip-10-0-138-12.us-west-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:07.659 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-bd2hl node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:08.036 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-xq5mz node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:08.838 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-5zjn5 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:10.005 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-crqcg node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:11.175 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-pw6c4 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:12.276 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-tf5h5 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:13.283 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-9g4sp node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:13.740 E ns/openshift-monitoring pod/openshift-state-metrics-84fb469879-j6cpn node/ip-10-0-157-223.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 14 13:48:14.398 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-xd7qp node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:14.801 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-8p9v5 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:15.609 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-k8lmr node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:15.961 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-n7kld node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:16.359 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-6fpgq node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:16.677 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-msdst node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.411 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-nnhg9 node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.824 E ns/openshift-image-registry pod/node-ca-5n2td node/ip-10-0-138-12.us-west-2.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.952 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-138-12.us-west-2.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.952 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-138-12.us-west-2.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.952 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-138-12.us-west-2.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:17.981 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-d4n4b node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:19.035 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-zndz8 node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:19.489 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-nx8xs node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:19.823 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-6vbjx node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:20.811 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-w48n2 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:21.563 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-kmx9q node/ip-10-0-138-12.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:21.835 E ns/openshift-monitoring pod/kube-state-metrics-59f5965d95-tkv9l node/ip-10-0-157-223.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Feb 14 13:48:22.015 E ns/openshift-monitoring pod/telemeter-client-769c977b66-r4sg5 node/ip-10-0-138-12.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Feb 14 13:48:22.015 E ns/openshift-monitoring pod/telemeter-client-769c977b66-r4sg5 node/ip-10-0-138-12.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Feb 14 13:48:22.379 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-5m4n9 node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:22.690 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-vxqtf node/ip-10-0-139-242.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:22.813 E ns/openshift-monitoring pod/prometheus-adapter-7c8bf59dcd-99qt6 node/ip-10-0-157-223.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0214 13:36:20.597842       1 adapter.go:93] successfully using in-cluster auth\nI0214 13:36:21.103224       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 14 13:48:23.781 E ns/openshift-image-registry pod/image-registry-5d467fb7ff-67nsw node/ip-10-0-157-223.us-west-2.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:23.839 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:27.000 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-58cbzbgxl node/ip-10-0-140-160.us-west-2.compute.internal container=operator container exited with code 255 (Error): e version: 17592 (20728)\nW0214 13:48:24.698956       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 18547 (20811)\nW0214 13:48:24.699086       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 17493 (20727)\nW0214 13:48:24.699174       1 reflector.go:299] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.ServiceCatalogControllerManager ended with: too old resource version: 20719 (20783)\nI0214 13:48:25.657934       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:25.699771       1 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:25.700155       1 reflector.go:158] Listing and watching *v1.ConfigMap from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:25.700331       1 reflector.go:158] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0214 13:48:25.700459       1 reflector.go:158] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:25.704561       1 reflector.go:158] Listing and watching *v1.ServiceCatalogControllerManager from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0214 13:48:25.704638       1 reflector.go:158] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:25.705784       1 reflector.go:158] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:134\nI0214 13:48:26.110443       1 httplog.go:90] GET /metrics: (12.651904ms) 200 [Prometheus/2.14.0 10.131.0.14:51264]\nI0214 13:48:26.153928       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 13:48:26.153985       1 leaderelection.go:66] leaderelection lost\n
Feb 14 13:48:27.890 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-796d8766db-825gv node/ip-10-0-138-143.us-west-2.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:35.259 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): geclasses.storage.k8s.io)\nE0214 13:48:34.285442       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0214 13:48:34.285529       1 leaderelection.go:330] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: configmaps "kube-scheduler" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler": RBAC: role.rbac.authorization.k8s.io "system:openshift:sa-listing-configmaps" not found\nE0214 13:48:34.285554       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nE0214 13:48:34.285572       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0214 13:48:34.285594       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0214 13:48:34.285612       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0214 13:48:34.285628       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0214 13:48:34.293746       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0214 13:48:34.302721       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\nE0214 13:48:34.302759       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nI0214 13:48:34.603717       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0214 13:48:34.603817       1 server.go:264] leaderelection lost\n
Feb 14 13:48:36.370 E ns/openshift-image-registry pod/cluster-image-registry-operator-5b8cbf8978-825qd node/ip-10-0-153-26.us-west-2.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:36.370 E ns/openshift-image-registry pod/cluster-image-registry-operator-5b8cbf8978-825qd node/ip-10-0-153-26.us-west-2.compute.internal container=cluster-image-registry-operator-watch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:48:44.035 E ns/openshift-monitoring pod/prometheus-adapter-7c8bf59dcd-29wgz node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0214 13:36:14.746914       1 adapter.go:93] successfully using in-cluster auth\nI0214 13:36:15.584896       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 14 13:48:46.610 E ns/openshift-cluster-node-tuning-operator pod/tuned-8bbsp node/ip-10-0-139-242.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 13:48:14.680275    2655 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-5h4g7) labels changed node wide: false\nI0214 13:48:14.720315    2655 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-7pv79) labels changed node wide: false\nI0214 13:48:14.763458    2655 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-1) labels changed node wide: true\nI0214 13:48:15.123737    2655 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:48:15.125513    2655 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:48:15.297914    2655 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 13:48:19.119686    2655 openshift-tuned.go:852] Lowering resyncPeriod to 57\nI0214 13:48:20.045492    2655 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-w48n2) labels changed node wide: true\nI0214 13:48:20.123679    2655 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:48:20.125289    2655 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:48:20.309732    2655 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 13:48:22.150876    2655 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-5m4n9) labels changed node wide: false\nI0214 13:48:22.471835    2655 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-vxqtf) labels changed node wide: false\nI0214 13:48:24.203192    2655 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0214 13:48:24.205047    2655 openshift-tuned.go:881] Pod event watch channel closed.\nI0214 13:48:24.205067    2655 openshift-tuned.go:883] Increasing resyncPeriod to 114\n
Feb 14 13:48:46.915 E ns/openshift-monitoring pod/node-exporter-gg7wr node/ip-10-0-138-143.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:30:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 13:48:47.096 E ns/openshift-cluster-node-tuning-operator pod/tuned-t9b9w node/ip-10-0-140-160.us-west-2.compute.internal container=tuned container exited with code 143 (Error): -ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: false\nI0214 13:46:54.501846   21900 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: false\nI0214 13:47:29.013477   21900 openshift-tuned.go:550] Pod (openshift-machine-api/cluster-autoscaler-operator-c6499b675-5vqlx) labels changed node wide: true\nI0214 13:47:33.958045   21900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:47:33.959550   21900 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:47:34.057396   21900 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 13:48:03.575368   21900 openshift-tuned.go:550] Pod (openshift-monitoring/node-exporter-qw6tf) labels changed node wide: true\nI0214 13:48:03.965866   21900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:48:03.971747   21900 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:48:04.145468   21900 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 13:48:13.892948   21900 openshift-tuned.go:550] Pod (openshift-console/downloads-6f4d899f5-drfrq) labels changed node wide: true\nI0214 13:48:13.958036   21900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:48:13.959371   21900 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:48:14.142730   21900 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 13:48:24.215414   21900 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0214 13:48:24.221891   21900 openshift-tuned.go:881] Pod event watch channel closed.\nI0214 13:48:24.222002   21900 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Feb 14 13:48:52.950 E ns/openshift-service-ca-operator pod/service-ca-operator-5f8cf9784-5t4w9 node/ip-10-0-138-143.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Feb 14 13:48:58.654 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-14T13:48:48.186Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-14T13:48:48.191Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-14T13:48:48.192Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-14T13:48:48.193Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-14T13:48:48.193Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-14T13:48:48.193Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-14T13:48:48.195Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-14T13:48:48.195Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-14
Feb 14 13:49:04.143 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-223.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/14 13:36:30 Watching directory: "/etc/alertmanager/config"\n
Feb 14 13:49:04.143 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-223.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/14 13:36:30 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:36:30 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:36:30 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:36:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/14 13:36:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:36:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:36:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:36:30 http.go:96: HTTPS: listening on [::]:9095\n
Feb 14 13:49:05.460 E ns/openshift-operator-lifecycle-manager pod/packageserver-5dbbf7fd59-bnm4j node/ip-10-0-140-160.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:49:08.195 E ns/openshift-monitoring pod/thanos-querier-88d55f85f-fwb88 node/ip-10-0-138-12.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/14 13:37:12 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 13:37:12 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:37:12 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:37:12 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/14 13:37:12 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:37:12 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 13:37:12 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:37:12 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/14 13:37:12 http.go:96: HTTPS: listening on [::]:9091\n
Feb 14 13:49:10.758 E ns/openshift-monitoring pod/node-exporter-km587 node/ip-10-0-139-242.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:30:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:30:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 13:49:13.412 E ns/openshift-console-operator pod/console-operator-f88666b4f-bdxcp node/ip-10-0-153-26.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): sole ended with: too old resource version: 20705 (20797)\nW0214 13:48:24.276794       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 20451 (20722)\nW0214 13:48:24.294755       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20708 (22307)\nW0214 13:48:24.294863       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 17565 (20728)\nW0214 13:48:24.294906       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20707 (22307)\nW0214 13:48:24.298400       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 20708 (22307)\nW0214 13:48:24.298500       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 18588 (20812)\nW0214 13:48:24.300073       1 reflector.go:299] github.com/openshift/client-go/console/informers/externalversions/factory.go:101: watch of *v1.ConsoleCLIDownload ended with: too old resource version: 20705 (20812)\nW0214 13:48:24.429802       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 18547 (20811)\nW0214 13:48:24.540317       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 18588 (20774)\nW0214 13:48:24.611474       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 23884 (24436)\nI0214 13:49:12.645367       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 13:49:12.645839       1 leaderelection.go:66] leaderelection lost\n
Feb 14 13:49:14.140 E ns/openshift-marketplace pod/redhat-operators-bc5f9bc9f-84vvv node/ip-10-0-138-12.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 14 13:49:21.119 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-157-223.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-14T13:49:16.109Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-14T13:49:16.113Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-14T13:49:16.113Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-14T13:49:16.114Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-14T13:49:16.114Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-14T13:49:16.114Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-14T13:49:16.114Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-14T13:49:16.114Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-14T13:49:16.115Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-14T13:49:16.115Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-14T13:49:16.115Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-14T13:49:16.115Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-14T13:49:16.115Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-14T13:49:16.115Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-14T13:49:16.115Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-14T13:49:16.115Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-14
Feb 14 13:49:26.735 E ns/openshift-marketplace pod/certified-operators-849c9f55c5-92dxm node/ip-10-0-139-242.us-west-2.compute.internal container=certified-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:49:45.778 E ns/openshift-ingress pod/router-default-7b7d78c8dd-vw9jc node/ip-10-0-139-242.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:48:55.789285       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:00.785496       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:05.788756       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:10.785610       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:15.786868       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:20.788578       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:25.799040       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:30.797253       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:35.788187       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:49:40.788843       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 14 13:49:45.904 E ns/openshift-controller-manager pod/controller-manager-zfp2f node/ip-10-0-153-26.us-west-2.compute.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:50:02.115 E ns/openshift-service-ca pod/service-serving-cert-signer-6bb489dffd-vdhv9 node/ip-10-0-138-143.us-west-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 14 13:50:02.306 E ns/openshift-service-ca pod/apiservice-cabundle-injector-5bc76bb49d-zhgl8 node/ip-10-0-140-160.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 14 13:50:36.388 E ns/openshift-controller-manager pod/controller-manager-hgc67 node/ip-10-0-140-160.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Feb 14 13:50:47.715 E ns/openshift-console pod/console-699dfc4d89-slcs7 node/ip-10-0-153-26.us-west-2.compute.internal container=console container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:51:07.467 E ns/openshift-console pod/console-699dfc4d89-7ccd5 node/ip-10-0-140-160.us-west-2.compute.internal container=console container exited with code 2 (Error): /kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/14 13:34:34 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/14 13:34:44 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/14 13:34:54 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: no such host\n2020/02/14 13:35:04 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: no such host\n2020/02/14 13:35:14 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: no such host\n2020/02/14 13:35:24 auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020/02/14 13:35:34 cmd/main: Binding to [::]:8443...\n2020/02/14 13:35:34 cmd/main: using TLS\n
Feb 14 13:51:24.087 - 30s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 13:51:24.512 E ns/openshift-sdn pod/sdn-controller-cct8f node/ip-10-0-140-160.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0214 13:21:07.036064       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 14 13:51:26.444 E ns/openshift-controller-manager pod/controller-manager-q97p6 node/ip-10-0-138-143.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): 
Feb 14 13:51:27.565 E ns/openshift-sdn pod/sdn-hxsw2 node/ip-10-0-140-160.us-west-2.compute.internal container=sdn container exited with code 255 (Error): :270] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.128.0.60:8443 10.129.0.72:8443]\nI0214 13:51:06.483971    3169 roundrobin.go:218] Delete endpoint 10.130.0.28:8443 for service "openshift-console/console:https"\nI0214 13:51:06.634687    3169 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:51:06.742531    3169 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:51:06.742552    3169 proxier.go:350] userspace syncProxyRules took 107.84737ms\nI0214 13:51:06.742562    3169 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:51:06.742573    3169 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:51:06.854456    3169 pod.go:539] CNI_DEL openshift-console/console-699dfc4d89-7ccd5\nI0214 13:51:07.005993    3169 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:51:07.085169    3169 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:51:07.085187    3169 proxier.go:350] userspace syncProxyRules took 79.175897ms\nI0214 13:51:07.085196    3169 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:51:14.491800    3169 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.14:6443 10.129.0.3:6443]\nI0214 13:51:14.491833    3169 roundrobin.go:218] Delete endpoint 10.130.0.4:6443 for service "openshift-multus/multus-admission-controller:"\nI0214 13:51:14.491876    3169 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:51:14.653309    3169 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:51:14.713518    3169 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:51:14.713534    3169 proxier.go:350] userspace syncProxyRules took 60.211181ms\nI0214 13:51:14.713543    3169 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0214 13:51:26.765999    3169 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 14 13:51:45.013 E ns/openshift-multus pod/multus-w42qw node/ip-10-0-139-242.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:51:45.586 E ns/openshift-multus pod/multus-admission-controller-9m2tf node/ip-10-0-140-160.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 14 13:51:45.634 E ns/openshift-operator-lifecycle-manager pod/packageserver-75d57777bd-98rds node/ip-10-0-140-160.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:51:48.083 E ns/openshift-sdn pod/sdn-ss4fz node/ip-10-0-139-242.us-west-2.compute.internal container=sdn container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:51:54.875 E ns/openshift-sdn pod/sdn-2mh4z node/ip-10-0-138-143.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ctory)\nI0214 13:51:50.405025    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nW0214 13:51:50.407832    9139 pod.go:274] CNI_ADD openshift-operator-lifecycle-manager/packageserver-7656bc5b55-qsbb5 failed: exit status 1\nI0214 13:51:50.416395    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0214 13:51:50.418970    9139 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/packageserver-7656bc5b55-qsbb5\nI0214 13:51:50.468761    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0214 13:51:50.471431    9139 pod.go:539] CNI_DEL openshift-operator-lifecycle-manager/packageserver-7656bc5b55-qsbb5\nI0214 13:51:52.468470    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0214 13:51:52.472615    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nW0214 13:51:52.475464    9139 pod.go:274] CNI_ADD openshift-controller-manager/controller-manager-q4lvq failed: exit status 1\nI0214 13:51:52.482190    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0214 13:51:52.484916    9139 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-q4lvq\nI0214 13:51:52.525199    9139 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0214 13:51:52.527802    9139 pod.go:539] CNI_DEL openshift-controller-manager/controller-manager-q4lvq\nF0214 13:51:54.164861    9139 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 14 13:52:25.840 E ns/openshift-service-ca pod/service-serving-cert-signer-c4ff79575-kczds node/ip-10-0-138-143.us-west-2.compute.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 14 13:52:27.053 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-6fb8d9fcc-h4ggf node/ip-10-0-153-26.us-west-2.compute.internal container=manager container exited with code 1 (Error): -image-registry-gcs\ntime="2020-02-14T13:48:44Z" level=debug msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry-gcs secret=openshift-image-registry/installer-cloud-credentials\ntime="2020-02-14T13:48:44Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry-gcs secret=openshift-image-registry/installer-cloud-credentials\ntime="2020-02-14T13:48:44Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry-gcs secret=openshift-image-registry/installer-cloud-credentials\ntime="2020-02-14T13:48:44Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-02-14T13:48:44Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-02-14T13:48:44Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-02-14T13:48:44Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-02-14T13:48:44Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-02-14T13:48:44Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-02-14T13:48:44Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-02-14T13:50:43Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-02-14T13:50:43Z" level=info msg="reconcile complete" controller=metrics elapsed=1.397514ms\ntime="2020-02-14T13:52:26Z" level=error msg="leader election lostunable to run the manager"\n
Feb 14 13:52:37.280 E ns/openshift-multus pod/multus-admission-controller-5vxbm node/ip-10-0-153-26.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 14 13:52:37.560 E ns/openshift-multus pod/multus-sw8qf node/ip-10-0-157-223.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:52:44.206 E ns/openshift-sdn pod/sdn-t4llm node/ip-10-0-139-242.us-west-2.compute.internal container=sdn container exited with code 255 (Error): :350] userspace syncProxyRules took 66.798861ms\nI0214 13:52:29.496470   75990 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:52:33.807276   75990 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.128.0.65:5443 10.130.0.58:5443 10.130.0.60:5443]\nI0214 13:52:33.807315   75990 roundrobin.go:218] Delete endpoint 10.130.0.60:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0214 13:52:33.807388   75990 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:52:33.866029   75990 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com: to [10.128.0.65:5443 10.130.0.60:5443]\nI0214 13:52:33.866067   75990 roundrobin.go:218] Delete endpoint 10.130.0.58:5443 for service "openshift-operator-lifecycle-manager/v1-packages-operators-coreos-com:"\nI0214 13:52:33.975850   75990 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:52:34.043662   75990 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:52:34.043685   75990 proxier.go:350] userspace syncProxyRules took 67.811361ms\nI0214 13:52:34.043696   75990 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:52:34.043706   75990 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:52:34.210797   75990 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:52:34.281405   75990 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:52:34.281430   75990 proxier.go:350] userspace syncProxyRules took 70.60985ms\nI0214 13:52:34.281441   75990 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:52:43.982468   75990 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0214 13:52:43.982516   75990 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 14 13:53:20.698 E ns/openshift-multus pod/multus-5v85q node/ip-10-0-138-12.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:53:37.757 E ns/openshift-sdn pod/sdn-cwk8g node/ip-10-0-138-12.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.66:6443 10.129.0.73:6443 10.130.0.59:6443]\nI0214 13:52:56.890592   14202 roundrobin.go:218] Delete endpoint 10.128.0.66:6443 for service "openshift-multus/multus-admission-controller:"\nI0214 13:52:56.890667   14202 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:52:57.053560   14202 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:52:57.121932   14202 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:52:57.121958   14202 proxier.go:350] userspace syncProxyRules took 68.373576ms\nI0214 13:52:57.121974   14202 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:53:27.122228   14202 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:53:27.338532   14202 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:53:27.420506   14202 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:53:27.420539   14202 proxier.go:350] userspace syncProxyRules took 81.977209ms\nI0214 13:53:27.420556   14202 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0214 13:53:36.822041   14202 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-869/service-test: to [10.131.0.17:80]\nI0214 13:53:36.822081   14202 roundrobin.go:218] Delete endpoint 10.129.2.17:80 for service "e2e-k8s-service-upgrade-869/service-test:"\nI0214 13:53:36.822140   14202 proxy.go:334] hybrid proxy: syncProxyRules start\nI0214 13:53:36.987542   14202 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0214 13:53:37.055604   14202 proxier.go:371] userspace proxy: processing 0 service events\nI0214 13:53:37.055633   14202 proxier.go:350] userspace syncProxyRules took 68.066374ms\nI0214 13:53:37.055649   14202 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0214 13:53:37.289426   14202 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 14 13:53:59.509 E ns/openshift-multus pod/multus-7bzhp node/ip-10-0-153-26.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:54:40.160 E ns/openshift-multus pod/multus-bhbnj node/ip-10-0-138-143.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:55:23.093 E ns/openshift-multus pod/multus-dbsr2 node/ip-10-0-140-160.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Feb 14 13:56:12.393 E ns/openshift-machine-config-operator pod/machine-config-operator-6664ffb8f-s5wk2 node/ip-10-0-138-143.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): m/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 15804 (19893)\nW0214 13:46:26.434429       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: too old resource version: 15812 (19896)\nW0214 13:46:26.434542       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 19745 (19805)\nW0214 13:46:26.434664       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: too old resource version: 15723 (19798)\nW0214 13:46:26.434752       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRoleBinding ended with: too old resource version: 15724 (19798)\nW0214 13:46:26.434848       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 15710 (19806)\nW0214 13:46:26.434988       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 19789 (20426)\nW0214 13:46:26.435073       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 15804 (19896)\nW0214 13:46:26.435156       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 19768 (19886)\nW0214 13:46:26.435278       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 18587 (19786)\nW0214 13:46:26.435361       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 15894 (19790)\n
Feb 14 13:58:07.673 E ns/openshift-machine-config-operator pod/machine-config-daemon-qq9cl node/ip-10-0-138-143.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 13:58:10.329 E ns/openshift-machine-config-operator pod/machine-config-daemon-66949 node/ip-10-0-157-223.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 13:58:20.164 E ns/openshift-machine-config-operator pod/machine-config-daemon-bztqt node/ip-10-0-153-26.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 13:58:38.269 E ns/openshift-dns pod/dns-default-4x9mj node/ip-10-0-153-26.us-west-2.compute.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:58:38.269 E ns/openshift-dns pod/dns-default-4x9mj node/ip-10-0-153-26.us-west-2.compute.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 13:58:53.449 E ns/openshift-machine-config-operator pod/machine-config-daemon-fbzmt node/ip-10-0-138-12.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 13:59:02.929 E ns/openshift-machine-config-operator pod/machine-config-daemon-sbpxg node/ip-10-0-139-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:01:37.686 E ns/openshift-machine-config-operator pod/machine-config-server-mbsk5 node/ip-10-0-153-26.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 13:25:17.237561       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 13:25:17.238484       1 api.go:51] Launching server on :22624\nI0214 13:25:17.238522       1 api.go:51] Launching server on :22623\nI0214 13:26:46.273643       1 api.go:97] Pool worker requested by 10.0.128.114:43648\n
Feb 14 14:01:40.163 E ns/openshift-machine-config-operator pod/machine-config-server-9sn2q node/ip-10-0-138-143.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 13:25:16.897789       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 13:25:16.898659       1 api.go:51] Launching server on :22624\nI0214 13:25:16.898722       1 api.go:51] Launching server on :22623\n
Feb 14 14:01:48.036 E ns/openshift-machine-config-operator pod/machine-config-server-nwxmg node/ip-10-0-140-160.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 13:25:16.856461       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 13:25:16.857535       1 api.go:51] Launching server on :22624\nI0214 13:25:16.857668       1 api.go:51] Launching server on :22623\nI0214 13:26:44.882174       1 api.go:97] Pool worker requested by 10.0.156.6:39627\nI0214 13:26:52.927577       1 api.go:97] Pool worker requested by 10.0.156.6:47754\n
Feb 14 14:01:48.899 E ns/openshift-ingress pod/router-default-bdd84b8f5-jn9l6 node/ip-10-0-157-223.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:19.346543       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:24.340848       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:32.205575       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:37.215686       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:52.648802       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:58:57.570380       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:59:02.575734       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:59:12.380254       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 13:59:19.982111       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:01:46.990413       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 14 14:01:48.956 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-223.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/14 13:49:27 Watching directory: "/etc/alertmanager/config"\n
Feb 14 14:01:48.956 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-157-223.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/14 13:49:27 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:49:27 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:49:27 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:49:27 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/14 13:49:27 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:49:27 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:49:27 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:49:27 http.go:96: HTTPS: listening on [::]:9095\n2020/02/14 13:58:06 server.go:3012: http: TLS handshake error from 10.128.2.21:41582: read tcp 10.131.0.26:9095->10.128.2.21:41582: read: connection reset by peer\n
Feb 14 14:01:49.020 E ns/openshift-monitoring pod/openshift-state-metrics-59fdcc84c6-b2xs2 node/ip-10-0-157-223.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 14 14:01:49.076 E ns/openshift-cluster-machine-approver pod/machine-approver-65468698f7-gbvgz node/ip-10-0-153-26.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0214 13:48:37.916043       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0214 13:48:37.916064       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0214 13:48:37.916104       1 main.go:236] Starting Machine Approver\nI0214 13:48:38.016309       1 main.go:146] CSR csr-8tplj added\nI0214 13:48:38.016337       1 main.go:149] CSR csr-8tplj is already approved\nI0214 13:48:38.016352       1 main.go:146] CSR csr-99pmc added\nI0214 13:48:38.016358       1 main.go:149] CSR csr-99pmc is already approved\nI0214 13:48:38.016366       1 main.go:146] CSR csr-ghbfm added\nI0214 13:48:38.016372       1 main.go:149] CSR csr-ghbfm is already approved\nI0214 13:48:38.016380       1 main.go:146] CSR csr-gx5dv added\nI0214 13:48:38.016386       1 main.go:149] CSR csr-gx5dv is already approved\nI0214 13:48:38.016395       1 main.go:146] CSR csr-jfs4q added\nI0214 13:48:38.016401       1 main.go:149] CSR csr-jfs4q is already approved\nI0214 13:48:38.016416       1 main.go:146] CSR csr-l7vs5 added\nI0214 13:48:38.016426       1 main.go:149] CSR csr-l7vs5 is already approved\nI0214 13:48:38.016434       1 main.go:146] CSR csr-4thp2 added\nI0214 13:48:38.016440       1 main.go:149] CSR csr-4thp2 is already approved\nI0214 13:48:38.016450       1 main.go:146] CSR csr-52qch added\nI0214 13:48:38.016456       1 main.go:149] CSR csr-52qch is already approved\nI0214 13:48:38.016464       1 main.go:146] CSR csr-8kz6z added\nI0214 13:48:38.016470       1 main.go:149] CSR csr-8kz6z is already approved\nI0214 13:48:38.016476       1 main.go:146] CSR csr-cbxk9 added\nI0214 13:48:38.016482       1 main.go:149] CSR csr-cbxk9 is already approved\nI0214 13:48:38.016489       1 main.go:146] CSR csr-qjglg added\nI0214 13:48:38.016495       1 main.go:149] CSR csr-qjglg is already approved\nI0214 13:48:38.016503       1 main.go:146] CSR csr-sfg77 added\nI0214 13:48:38.016526       1 main.go:149] CSR csr-sfg77 is already approved\n
Feb 14 14:01:49.171 E ns/openshift-monitoring pod/prometheus-adapter-85c5f6845c-kwxq4 node/ip-10-0-157-223.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0214 13:48:42.134642       1 adapter.go:93] successfully using in-cluster auth\nI0214 13:48:43.006736       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 14 14:01:49.281 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-j5q9g node/ip-10-0-157-223.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/14 13:49:01 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 13:49:01 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:49:01 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:49:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/14 13:49:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:49:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 13:49:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:49:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/14 13:49:01 http.go:96: HTTPS: listening on [::]:9091\n
Feb 14 14:01:49.817 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Feb 14 14:01:51.020 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-df46cf599-ckf8m node/ip-10-0-153-26.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:01:52.025 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-689866f95c-7jtc5 node/ip-10-0-153-26.us-west-2.compute.internal container=operator container exited with code 255 (Error): 159.094011ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:01:07.002608       1 request.go:538] Throttling request took 196.356504ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:01:08.705728       1 httplog.go:90] GET /metrics: (5.754105ms) 200 [Prometheus/2.14.0 10.131.0.25:49488]\nI0214 14:01:13.686669       1 httplog.go:90] GET /metrics: (1.225027ms) 200 [Prometheus/2.14.0 10.128.2.19:33562]\nI0214 14:01:26.804211       1 request.go:538] Throttling request took 159.171907ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:01:27.004221       1 request.go:538] Throttling request took 196.152247ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:01:38.706837       1 httplog.go:90] GET /metrics: (6.870329ms) 200 [Prometheus/2.14.0 10.131.0.25:49488]\nI0214 14:01:43.686580       1 httplog.go:90] GET /metrics: (1.17636ms) 200 [Prometheus/2.14.0 10.128.2.19:33562]\nI0214 14:01:46.803276       1 request.go:538] Throttling request took 158.934417ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:01:47.002608       1 request.go:538] Throttling request took 194.664005ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:01:49.006017       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Network total 0 items received\nI0214 14:01:51.297312       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:01:51.297352       1 leaderelection.go:66] leaderelection lost\n
Feb 14 14:01:52.088 E ns/openshift-console pod/console-cb445dd98-j94d5 node/ip-10-0-153-26.us-west-2.compute.internal container=console container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:01:53.110 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6858ccdc98-zq6fb node/ip-10-0-153-26.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:01:55.210 E ns/openshift-machine-config-operator pod/machine-config-controller-85fdf786bb-mldj5 node/ip-10-0-153-26.us-west-2.compute.internal container=machine-config-controller container exited with code 2 (Error): chineConfig  01-master-container-runtime  machineconfiguration.openshift.io/v1  } {MachineConfig  01-master-kubelet  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-a47a2508-9c75-4e50-bbff-f9f1b1793c6f-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-master-ssh  machineconfiguration.openshift.io/v1  }]\nI0214 14:01:40.754239       1 render_controller.go:516] Pool master: now targeting: rendered-master-339531e74def650334dc01f71c551df9\nI0214 14:01:45.745169       1 node_controller.go:758] Setting node ip-10-0-157-223.us-west-2.compute.internal to desired config rendered-worker-ccc318a78207f5773b4baf8558e7f0b1\nI0214 14:01:45.753754       1 node_controller.go:758] Setting node ip-10-0-153-26.us-west-2.compute.internal to desired config rendered-master-339531e74def650334dc01f71c551df9\nI0214 14:01:45.761337       1 node_controller.go:452] Pool worker: node ip-10-0-157-223.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-ccc318a78207f5773b4baf8558e7f0b1\nI0214 14:01:45.774061       1 node_controller.go:452] Pool master: node ip-10-0-153-26.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-339531e74def650334dc01f71c551df9\nI0214 14:01:46.774995       1 node_controller.go:452] Pool worker: node ip-10-0-157-223.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0214 14:01:46.789685       1 node_controller.go:433] Pool worker: node ip-10-0-157-223.us-west-2.compute.internal is now reporting unready: node ip-10-0-157-223.us-west-2.compute.internal is reporting Unschedulable\nI0214 14:01:46.797642       1 node_controller.go:452] Pool master: node ip-10-0-153-26.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0214 14:01:46.822867       1 node_controller.go:433] Pool master: node ip-10-0-153-26.us-west-2.compute.internal is now reporting unready: node ip-10-0-153-26.us-west-2.compute.internal is reporting Unschedulable\n
Feb 14 14:01:55.341 E ns/openshift-service-ca pod/apiservice-cabundle-injector-754c45bf45-69rl4 node/ip-10-0-153-26.us-west-2.compute.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 14 14:01:55.384 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-6867d6d8d6-fjlc6 node/ip-10-0-153-26.us-west-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:02:12.339 E ns/openshift-machine-api pod/machine-api-controllers-7bdc7db669-4sccb node/ip-10-0-138-143.us-west-2.compute.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Feb 14 14:02:12.339 E ns/openshift-machine-api pod/machine-api-controllers-7bdc7db669-4sccb node/ip-10-0-138-143.us-west-2.compute.internal container=machine-controller container exited with code 255 (Error): 
Feb 14 14:02:13.085 E kube-apiserver Kube API started failing: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 14 14:02:16.598 E ns/openshift-marketplace pod/marketplace-operator-548fb9d6f8-8ccfz node/ip-10-0-140-160.us-west-2.compute.internal container=marketplace-operator container exited with code 1 (Error): 
Feb 14 14:02:17.242 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 14 14:02:18.418 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=scheduler container exited with code 255 (Error): ble: CPU<3500m>|Memory<15331892Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:01:55.621486       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-rgbvc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:02:02.447212       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-rgbvc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:02:06.748130       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-rgbvc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:02:15.678629       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-rgbvc: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nE0214 14:02:16.221926       1 leaderelection.go:330] error retrieving resource lock openshift-kube-scheduler/kube-scheduler: etcdserver: request timed out\nW0214 14:02:17.241708       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: too old resource version: 19789 (35181)\nI0214 14:02:18.013657       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0214 14:02:18.013680       1 server.go:264] leaderelection lost\n
Feb 14 14:02:19.021 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus-proxy container exited with code 1 (Error): 2020/02/14 14:02:14 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/14 14:02:14 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 14:02:14 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 14:02:14 main.go:138: Invalid configuration:\n  unable to load OpenShift configuration: unable to retrieve authentication information for tokens: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 14 14:02:19.021 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-14T14:02:05.619Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-14T14:02:05.623Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-14T14:02:05.623Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-14T14:02:05.625Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-14T14:02:05.625Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-14T14:02:05.625Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-14T14:02:05.626Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-14T14:02:05.626Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-14
Feb 14 14:02:49.639 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: running task Updating Prometheus Operator failed: reconciling Prometheus Operator Service failed: retrieving Service object failed: etcdserver: request timed out
Feb 14 14:03:42.262 E ns/openshift-marketplace pod/community-operators-5f69df667-p4qqb node/ip-10-0-138-12.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 14 14:03:43.257 E ns/openshift-marketplace pod/certified-operators-dd94b6f89-nbzjs node/ip-10-0-138-12.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Feb 14 14:04:16.089 E ns/openshift-cluster-node-tuning-operator pod/tuned-qdvwc node/ip-10-0-157-223.us-west-2.compute.internal container=tuned container exited with code 143 (Error): ib/tuned/ocp-pod-labels.cfg\nI0214 13:56:36.113447    1571 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:56:36.226833    1571 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 13:58:13.851937    1571 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-66949) labels changed node wide: true\nI0214 13:58:16.111329    1571 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 13:58:16.114469    1571 openshift-tuned.go:441] Getting recommended profile...\nI0214 13:58:16.226798    1571 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:01:49.930875    1571 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-67dbc4d74c-j5q9g) labels changed node wide: true\nI0214 14:01:51.111286    1571 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:01:51.113146    1571 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:01:51.224130    1571 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:01:53.900386    1571 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-adapter-85c5f6845c-kwxq4) labels changed node wide: true\nI0214 14:01:56.111289    1571 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:01:56.113172    1571 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:01:56.226073    1571 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:02:08.305964    1571 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0214 14:02:08.308337    1571 openshift-tuned.go:881] Pod event watch channel closed.\nI0214 14:02:08.308382    1571 openshift-tuned.go:883] Increasing resyncPeriod to 112\n
Feb 14 14:04:16.105 E ns/openshift-monitoring pod/node-exporter-958nl node/ip-10-0-157-223.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:49:09Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:09Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:04:16.125 E ns/openshift-multus pod/multus-2hk6g node/ip-10-0-157-223.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:04:16.157 E ns/openshift-sdn pod/ovs-x62f6 node/ip-10-0-157-223.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): dge|INFO|bridge br0: deleted interface veth0aee59c3 on port 7\n2020-02-14T14:01:47.973Z|00143|connmgr|INFO|br0<->unix#496: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:48.009Z|00144|connmgr|INFO|br0<->unix#499: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:48.041Z|00145|bridge|INFO|bridge br0: deleted interface veth44faa350 on port 13\n2020-02-14T14:01:48.145Z|00146|connmgr|INFO|br0<->unix#502: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:48.216Z|00147|connmgr|INFO|br0<->unix#505: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:48.253Z|00148|bridge|INFO|bridge br0: deleted interface vethf978e47e on port 14\n2020-02-14T14:01:48.299Z|00149|connmgr|INFO|br0<->unix#508: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:48.356Z|00150|connmgr|INFO|br0<->unix#511: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:48.393Z|00151|bridge|INFO|bridge br0: deleted interface veth377d2a31 on port 9\n2020-02-14T14:01:48.449Z|00152|connmgr|INFO|br0<->unix#514: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:48.501Z|00153|connmgr|INFO|br0<->unix#517: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:48.533Z|00154|bridge|INFO|bridge br0: deleted interface vetha5b7c018 on port 10\n2020-02-14T14:01:48.586Z|00155|connmgr|INFO|br0<->unix#522: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:48.625Z|00156|connmgr|INFO|br0<->unix#525: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:48.647Z|00157|bridge|INFO|bridge br0: deleted interface veth1881c0c3 on port 3\n2020-02-14T14:02:17.731Z|00158|connmgr|INFO|br0<->unix#547: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:02:17.758Z|00159|connmgr|INFO|br0<->unix#550: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:02:17.778Z|00160|bridge|INFO|bridge br0: deleted interface veth83b4bf87 on port 6\n2020-02-14T14:02:17.768Z|00025|jsonrpc|WARN|unix#489: receive error: Connection reset by peer\n2020-02-14T14:02:17.768Z|00026|reconnect|WARN|unix#489: connection dropped (Connection reset by peer)\nTerminated\n
Feb 14 14:04:16.192 E ns/openshift-machine-config-operator pod/machine-config-daemon-hvf5p node/ip-10-0-157-223.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:04:20.764 E ns/openshift-multus pod/multus-2hk6g node/ip-10-0-157-223.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 14 14:04:26.328 E ns/openshift-machine-config-operator pod/machine-config-daemon-hvf5p node/ip-10-0-157-223.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:04:40.774 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error):      1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0214 14:02:07.823130       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0214 14:02:07.823511       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0214 14:02:07.885045       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0214 14:02:07.985672       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.985996       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.986153       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nE0214 14:02:07.986358       1 controller.go:114] loading OpenAPI spec for "v1.user.openshift.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error trying to reach service: 'read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer', Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]\nI0214 14:02:07.986370       1 controller.go:127] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue.\nI0214 14:02:07.986406       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.986608       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\n
Feb 14 14:04:40.774 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0214 13:48:29.793841       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 14 14:04:40.774 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:58:34.385738       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:58:34.386060       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:58:34.593950       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:58:34.594199       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 14 14:04:40.859 E ns/openshift-monitoring pod/node-exporter-nlvhz node/ip-10-0-153-26.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:48:45Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:45Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:04:40.870 E ns/openshift-cluster-node-tuning-operator pod/tuned-qgmjs node/ip-10-0-153-26.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 14 14:01:51.130943     784 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-153-26.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:01:51.489326     784 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-4-ip-10-0-153-26.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:01:51.885772     784 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-6-ip-10-0-153-26.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:01:52.285292     784 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-153-26.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:01:52.440416     784 openshift-tuned.go:550] Pod (openshift-cloud-credential-operator/cloud-credential-operator-6fb8d9fcc-h4ggf) labels changed node wide: true\nI0214 14:01:57.225353     784 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:01:57.226694     784 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:01:57.334293     784 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:02:02.423285     784 openshift-tuned.go:550] Pod (openshift-insights/insights-operator-744996cb69-sbpx8) labels changed node wide: true\nI0214 14:02:07.226389     784 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:02:07.227704     784 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:02:07.343267     784 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:02:07.752718     784 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-153-26.us-west-2.compute.internal) labels changed node wide: true\nI0214 14:02:07.809192     784 openshift-tuned.go:137] Received signal: terminated\nI0214 14:02:07.810662     784 openshift-tuned.go:304] Sending TERM to PID 1096\n
Feb 14 14:04:40.903 E ns/openshift-sdn pod/sdn-controller-n9v9z node/ip-10-0-153-26.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0214 13:51:47.496309       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0214 13:51:47.512518       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"fdedfd2d-482e-42c0-951d-0de984cd6e0e", ResourceVersion:"29264", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717283266, loc:(*time.Location)(0x2b77ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-153-26\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-14T13:21:06Z\",\"renewTime\":\"2020-02-14T13:51:47Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-153-26 became leader'\nI0214 13:51:47.512590       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0214 13:51:47.518186       1 master.go:51] Initializing SDN master\nI0214 13:51:47.530440       1 network_controller.go:60] Started OpenShift Network Controller\n
Feb 14 14:04:40.910 E ns/openshift-controller-manager pod/controller-manager-4rf8d node/ip-10-0-153-26.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 14 14:04:40.924 E ns/openshift-sdn pod/ovs-wggp9 node/ip-10-0-153-26.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): |INFO|bridge br0: deleted interface veth59d04ce4 on port 16\n2020-02-14T14:01:53.369Z|00179|connmgr|INFO|br0<->unix#700: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:53.407Z|00180|connmgr|INFO|br0<->unix#703: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:53.439Z|00181|bridge|INFO|bridge br0: deleted interface veth34f92491 on port 22\n2020-02-14T14:01:53.665Z|00182|connmgr|INFO|br0<->unix#708: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:53.714Z|00183|connmgr|INFO|br0<->unix#711: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:53.745Z|00184|bridge|INFO|bridge br0: deleted interface vethb856f4cb on port 17\n2020-02-14T14:01:53.944Z|00185|connmgr|INFO|br0<->unix#714: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:53.984Z|00186|connmgr|INFO|br0<->unix#717: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:54.005Z|00187|bridge|INFO|bridge br0: deleted interface veth22a4e65d on port 11\n2020-02-14T14:01:54.302Z|00188|connmgr|INFO|br0<->unix#720: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:54.343Z|00189|connmgr|INFO|br0<->unix#723: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:54.377Z|00190|bridge|INFO|bridge br0: deleted interface veth5c54f12a on port 15\n2020-02-14T14:01:54.972Z|00013|jsonrpc|WARN|unix#639: receive error: Connection reset by peer\n2020-02-14T14:01:54.972Z|00014|reconnect|WARN|unix#639: connection dropped (Connection reset by peer)\n2020-02-14T14:01:54.548Z|00191|connmgr|INFO|br0<->unix#726: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:54.661Z|00192|connmgr|INFO|br0<->unix#729: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:54.690Z|00193|bridge|INFO|bridge br0: deleted interface veth586f592f on port 30\n2020-02-14T14:01:54.927Z|00194|connmgr|INFO|br0<->unix#732: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:01:54.955Z|00195|connmgr|INFO|br0<->unix#735: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:01:55.048Z|00196|bridge|INFO|bridge br0: deleted interface vethf38b5d7f on port 9\nTerminated\n
Feb 14 14:04:40.930 E ns/openshift-multus pod/multus-admission-controller-j286v node/ip-10-0-153-26.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 14 14:04:40.941 E ns/openshift-multus pod/multus-q8vgb node/ip-10-0-153-26.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:04:40.952 E ns/openshift-machine-config-operator pod/machine-config-daemon-dhmz9 node/ip-10-0-153-26.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:04:40.974 E ns/openshift-machine-config-operator pod/machine-config-server-lbnz6 node/ip-10-0-153-26.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 14:01:39.472175       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 14:01:39.473027       1 api.go:51] Launching server on :22624\nI0214 14:01:39.473070       1 api.go:51] Launching server on :22623\n
Feb 14 14:04:45.938 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0214 13:45:24.304031       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0214 13:45:24.307130       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0214 13:45:24.307841       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nE0214 13:48:34.229564       1 leaderelection.go:306] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: configmaps "cluster-policy-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "openshift-kube-controller-manager"\n
Feb 14 14:04:45.938 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:00:56.240767       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:00:56.241075       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:06.249139       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:06.249513       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:16.257011       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:16.262453       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:26.264922       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:26.265195       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:36.272940       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:36.273208       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:46.282155       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:46.282452       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:01:56.289813       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:01:56.290042       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:02:06.303819       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:02:06.304128       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 14 14:04:45.938 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1581686677" (2020-02-14 13:24:53 +0000 UTC to 2022-02-13 13:24:54 +0000 UTC (now=2020-02-14 13:45:20.962704758 +0000 UTC))\nI0214 13:45:20.963052       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1581687920" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581687920" (2020-02-14 12:45:20 +0000 UTC to 2021-02-13 12:45:20 +0000 UTC (now=2020-02-14 13:45:20.963033762 +0000 UTC))\nI0214 13:45:20.963171       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1581687920" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581687920" (2020-02-14 12:45:20 +0000 UTC to 2021-02-13 12:45:20 +0000 UTC (now=2020-02-14 13:45:20.963157349 +0000 UTC))\nI0214 13:45:20.963197       1 secure_serving.go:178] Serving securely on [::]:10257\nI0214 13:45:20.963237       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0214 13:45:20.964039       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0214 13:45:20.964128       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0214 13:48:26.275292       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\n
Feb 14 14:04:46.007 E ns/openshift-multus pod/multus-q8vgb node/ip-10-0-153-26.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 14 14:04:46.041 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-153-26.us-west-2.compute.internal node/ip-10-0-153-26.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): 8 +0000 UTC))\nI0214 13:48:37.356909       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1581686675" [] issuer="kubelet-signer" (2020-02-14 13:24:35 +0000 UTC to 2020-02-15 13:05:13 +0000 UTC (now=2020-02-14 13:48:37.356897526 +0000 UTC))\nI0214 13:48:37.356934       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-02-14 13:05:11 +0000 UTC to 2020-02-15 13:05:11 +0000 UTC (now=2020-02-14 13:48:37.356923616 +0000 UTC))\nI0214 13:48:37.357227       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1581686677" (2020-02-14 13:24:48 +0000 UTC to 2022-02-13 13:24:49 +0000 UTC (now=2020-02-14 13:48:37.357213193 +0000 UTC))\nI0214 13:48:37.357954       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1581688117" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581688117" (2020-02-14 12:48:37 +0000 UTC to 2021-02-13 12:48:37 +0000 UTC (now=2020-02-14 13:48:37.357937825 +0000 UTC))\nI0214 13:48:37.358064       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1581688117" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581688117" (2020-02-14 12:48:37 +0000 UTC to 2021-02-13 12:48:37 +0000 UTC (now=2020-02-14 13:48:37.358050541 +0000 UTC))\n
Feb 14 14:04:46.550 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady::StaticPods_Error: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-153-26.us-west-2.compute.internal" not ready since 2020-02-14 14:04:39 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-7" is not ready\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-7" is terminated: "Error" - "     1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0214 14:02:07.823130       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0214 14:02:07.823511       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0214 14:02:07.885045       1 controller.go:107] OpenAPI AggregationController: Processing item v1.user.openshift.io\nI0214 14:02:07.985672       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.985996       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.986153       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nE0214 14:02:07.986358       1 controller.go:114] loading OpenAPI spec for \"v1.user.openshift.io\" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: Error trying to reach service: 'read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer', Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]\nI0214 14:02:07.986370       1 controller.go:127] OpenAPI AggregationController: action for item v1.user.openshift.io: Rate Limited Requeue.\nI0214 14:02:07.986406       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\nI0214 14:02:07.986608       1 log.go:172] httputil: ReverseProxy read error during body copy: read tcp 10.129.0.1:52412->10.129.0.50:8443: read: connection reset by peer\n"\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-cert-syncer-7" is not ready\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-cert-syncer-7" is terminated: "Error" - "network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:58:34.385738       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:58:34.386060       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:58:34.593950       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:58:34.594199       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n"\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-insecure-readyz-7" is not ready\nStaticPodsDegraded: nodes/ip-10-0-153-26.us-west-2.compute.internal pods/kube-apiserver-ip-10-0-153-26.us-west-2.compute.internal container="kube-apiserver-insecure-readyz-7" is terminated: "Error" - "I0214 13:48:29.793841       1 readyz.go:103] Listening on 0.0.0.0:6080\n"
Feb 14 14:04:50.706 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 14 14:04:51.273 E ns/openshift-machine-config-operator pod/machine-config-daemon-dhmz9 node/ip-10-0-153-26.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:05:04.634 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-12.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/14 14:02:10 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 14 14:05:04.634 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/14 14:02:19 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/14 14:02:19 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 14:02:19 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 14:02:19 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/14 14:02:19 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 14:02:19 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/14 14:02:19 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 14:02:19 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/14 14:02:19 http.go:96: HTTPS: listening on [::]:9091\n
Feb 14 14:05:04.634 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-14T14:02:10.112204377Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-14T14:02:10.112340914Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-14T14:02:10.138155512Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-14T14:02:15.114227231Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-14T14:02:20.278649103Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 14 14:05:28.394 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-157-223.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-14T14:05:26.622Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-14T14:05:26.629Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-14T14:05:26.630Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-14T14:05:26.632Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-14T14:05:26.632Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-14T14:05:26.632Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-14T14:05:26.634Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-14T14:05:26.634Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-14
Feb 14 14:05:29.942 E ns/openshift-monitoring pod/prometheus-operator-6c8db4bf84-jr5bl node/ip-10-0-138-143.us-west-2.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:05:31.094 E ns/openshift-machine-api pod/machine-api-operator-768897f94b-bxkck node/ip-10-0-138-143.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 14 14:05:31.202 E ns/openshift-machine-config-operator pod/machine-config-operator-7fffb697bc-tdhcp node/ip-10-0-138-143.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-machine-config-operator_machine-config-operator-7fffb697bc-tdhcp_0895a211-547b-40db-b4d4-28c29688e013/machine-config-operator/0.log": lstat /var/log/pods/openshift-machine-config-operator_machine-config-operator-7fffb697bc-tdhcp_0895a211-547b-40db-b4d4-28c29688e013/machine-config-operator/0.log: no such file or directory
Feb 14 14:05:31.219 E ns/openshift-machine-api pod/machine-api-controllers-7bdc7db669-4sccb node/ip-10-0-138-143.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 14 14:05:31.313 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-tt55z node/ip-10-0-138-143.us-west-2.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:05:31.313 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-tt55z node/ip-10-0-138-143.us-west-2.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:05:31.313 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-tt55z node/ip-10-0-138-143.us-west-2.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:05:31.313 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-tt55z node/ip-10-0-138-143.us-west-2.compute.internal container=thanos-querier container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:05:32.223 E ns/openshift-console pod/console-cb445dd98-9znmj node/ip-10-0-138-143.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/02/14 13:50:38 cmd/main: cookies are secure!\n2020/02/14 13:50:38 cmd/main: Binding to [::]:8443...\n2020/02/14 13:50:38 cmd/main: using TLS\n
Feb 14 14:05:32.302 E ns/openshift-cluster-machine-approver pod/machine-approver-65468698f7-j6qr7 node/ip-10-0-138-143.us-west-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0214 14:01:52.854396       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0214 14:01:52.854420       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0214 14:01:52.854459       1 main.go:236] Starting Machine Approver\nI0214 14:01:52.954661       1 main.go:146] CSR csr-sfg77 added\nI0214 14:01:52.954688       1 main.go:149] CSR csr-sfg77 is already approved\nI0214 14:01:52.954702       1 main.go:146] CSR csr-8kz6z added\nI0214 14:01:52.954706       1 main.go:149] CSR csr-8kz6z is already approved\nI0214 14:01:52.954711       1 main.go:146] CSR csr-8tplj added\nI0214 14:01:52.954715       1 main.go:149] CSR csr-8tplj is already approved\nI0214 14:01:52.954725       1 main.go:146] CSR csr-qjglg added\nI0214 14:01:52.954731       1 main.go:149] CSR csr-qjglg is already approved\nI0214 14:01:52.954738       1 main.go:146] CSR csr-cbxk9 added\nI0214 14:01:52.954742       1 main.go:149] CSR csr-cbxk9 is already approved\nI0214 14:01:52.954746       1 main.go:146] CSR csr-ghbfm added\nI0214 14:01:52.954750       1 main.go:149] CSR csr-ghbfm is already approved\nI0214 14:01:52.954754       1 main.go:146] CSR csr-gx5dv added\nI0214 14:01:52.954758       1 main.go:149] CSR csr-gx5dv is already approved\nI0214 14:01:52.954762       1 main.go:146] CSR csr-jfs4q added\nI0214 14:01:52.954766       1 main.go:149] CSR csr-jfs4q is already approved\nI0214 14:01:52.954771       1 main.go:146] CSR csr-l7vs5 added\nI0214 14:01:52.954774       1 main.go:149] CSR csr-l7vs5 is already approved\nI0214 14:01:52.954779       1 main.go:146] CSR csr-4thp2 added\nI0214 14:01:52.954782       1 main.go:149] CSR csr-4thp2 is already approved\nI0214 14:01:52.954787       1 main.go:146] CSR csr-52qch added\nI0214 14:01:52.954791       1 main.go:149] CSR csr-52qch is already approved\nI0214 14:01:52.954797       1 main.go:146] CSR csr-99pmc added\nI0214 14:01:52.954803       1 main.go:149] CSR csr-99pmc is already approved\n
Feb 14 14:05:32.422 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-7569c7bdf-qg6g9 node/ip-10-0-138-143.us-west-2.compute.internal container=operator container exited with code 255 (Error): onGoRestful\nI0214 14:04:41.411926       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0214 14:04:41.413038       1 httplog.go:90] GET /metrics: (4.416861ms) 200 [Prometheus/2.14.0 10.128.2.19:37508]\nI0214 14:04:43.378621       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0214 14:04:53.385464       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0214 14:04:59.191723       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0214 14:04:59.191747       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0214 14:04:59.192921       1 httplog.go:90] GET /metrics: (5.176381ms) 200 [Prometheus/2.14.0 10.129.2.34:41566]\nI0214 14:05:03.529796       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0214 14:05:11.412319       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0214 14:05:11.412340       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0214 14:05:11.413667       1 httplog.go:90] GET /metrics: (5.165331ms) 200 [Prometheus/2.14.0 10.128.2.19:37508]\nI0214 14:05:13.540833       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0214 14:05:23.549451       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0214 14:05:29.748626       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:05:29.748678       1 leaderelection.go:66] leaderelection lost\n
Feb 14 14:05:34.229 E ns/openshift-service-ca pod/configmap-cabundle-injector-6d4479c677-sz2dc node/ip-10-0-138-143.us-west-2.compute.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Feb 14 14:05:34.636 E ns/openshift-operator-lifecycle-manager pod/packageserver-7656bc5b55-t69zq node/ip-10-0-153-26.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:06:38.949 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana ClusterRole failed: updating ClusterRole object failed: Put https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/grafana: read tcp 10.130.0.82:58326->172.30.0.1:443: read: connection reset by peer
Feb 14 14:07:27.989 E ns/openshift-monitoring pod/node-exporter-6g86k node/ip-10-0-138-12.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:48:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:07:28.042 E ns/openshift-multus pod/multus-mnmh9 node/ip-10-0-138-12.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:07:28.064 E ns/openshift-machine-config-operator pod/machine-config-daemon-jh885 node/ip-10-0-138-12.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:07:28.093 E ns/openshift-cluster-node-tuning-operator pod/tuned-s67k5 node/ip-10-0-138-12.us-west-2.compute.internal container=tuned container exited with code 143 (Error): openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-5f69df667-p4qqb) labels changed node wide: true\nI0214 14:03:47.954975     576 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:03:47.956586     576 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:03:48.090516     576 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:05:02.115300     576 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-603/dp-657fc4b57d-58gdc) labels changed node wide: true\nI0214 14:05:02.957054     576 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:05:02.970816     576 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:05:03.255936     576 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:05:17.143758     576 openshift-tuned.go:550] Pod (openshift-image-registry/image-registry-5d467fb7ff-n5454) labels changed node wide: true\nI0214 14:05:17.954984     576 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:05:17.956502     576 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:05:18.069871     576 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:05:34.052211     576 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-4415/foo-sbsbw) labels changed node wide: false\nI0214 14:05:37.144286     576 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-869/service-test-5k4gv) labels changed node wide: true\nI0214 14:05:37.841109     576 openshift-tuned.go:137] Received signal: terminated\nI0214 14:05:37.841185     576 openshift-tuned.go:304] Sending TERM to PID 1040\n2020-02-14 14:05:37,841 INFO     tuned.daemon.controller: terminating controller\n2020-02-14 14:05:37,842 INFO     tuned.daemon.daemon: stopping tuning\n
Feb 14 14:07:38.611 E ns/openshift-machine-config-operator pod/machine-config-daemon-jh885 node/ip-10-0-138-12.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:07:45.273 E ns/openshift-marketplace pod/community-operators-5cc68649d7-4x8nm node/ip-10-0-139-242.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Feb 14 14:07:45.314 E ns/openshift-ingress pod/router-default-bdd84b8f5-2xhnj node/ip-10-0-139-242.us-west-2.compute.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:06:02.752241       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:06:07.744468       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:06:20.992672       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:06:25.948760       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:06:30.955137       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:07:21.910037       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:07:26.906771       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:07:32.464859       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:07:37.464760       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0214 14:07:42.462699       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 14 14:07:45.343 E ns/openshift-monitoring pod/thanos-querier-67dbc4d74c-cs8ld node/ip-10-0-139-242.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/14 14:01:55 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 14:01:55 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 14:01:55 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 14:01:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/14 14:01:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 14:01:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/14 14:01:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 14:01:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/14 14:01:55 http.go:96: HTTPS: listening on [::]:9091\n
Feb 14 14:07:45.353 E ns/openshift-marketplace pod/redhat-operators-88764ff9b-kzgwt node/ip-10-0-139-242.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Feb 14 14:07:45.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/14 13:48:56 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 14 14:07:45.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): ount:openshift-monitoring:prometheus-k8s\n2020/02/14 13:48:57 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:48:57 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:48:57 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/14 13:48:57 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:48:57 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/14 13:48:57 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:48:57 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/14 13:48:57 http.go:96: HTTPS: listening on [::]:9091\n2020/02/14 13:49:17 oauthproxy.go:774: basicauth: 10.131.0.20:48402 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/14 13:50:17 oauthproxy.go:774: basicauth: 10.131.0.20:49824 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/14 13:51:17 oauthproxy.go:774: basicauth: 10.131.0.20:50720 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/14 13:53:12 reverseproxy.go:447: http: proxy error: context canceled\n2020/02/14 13:56:03 oauthproxy.go:774: basicauth: 10.131.0.20:54908 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/14 14:00:33 oauthproxy.go:774: basicauth: 10.131.0.20:58770 Authorization header does not start with 'Basic', skipping basic authentication\n202
Feb 14 14:07:45.369 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-14T13:48:52.145880947Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-14T13:48:52.146051772Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-14T13:48:52.14814786Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-14T13:48:57.147920261Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-14T13:49:02.318854592Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 14 14:07:46.340 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-139-242.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/02/14 13:49:42 Watching directory: "/etc/alertmanager/config"\n
Feb 14 14:07:46.340 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-139-242.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/14 13:49:43 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:49:43 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/14 13:49:43 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/14 13:49:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/14 13:49:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/14 13:49:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/14 13:49:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/14 13:49:43 http.go:96: HTTPS: listening on [::]:9095\n
Feb 14 14:07:46.434 E ns/openshift-monitoring pod/prometheus-adapter-85c5f6845c-5cwvn node/ip-10-0-139-242.us-west-2.compute.internal container=prometheus-adapter container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:08:05.799 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-138-12.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-14T14:08:03.889Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-14T14:08:03.902Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-14T14:08:03.903Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-14T14:08:03.904Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-14T14:08:03.904Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-14T14:08:03.905Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-14T14:08:03.905Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-14T14:08:03.905Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-14T14:08:03.906Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-14T14:08:03.906Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-14T14:08:03.907Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-14
Feb 14 14:08:28.981 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-138-143.us-west-2.compute.internal" not ready since 2020-02-14 14:08:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Feb 14 14:08:28.987 E clusteroperator/kube-controller-manager changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-138-143.us-west-2.compute.internal" not ready since 2020-02-14 14:08:28 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Feb 14 14:08:29.002 E clusteroperator/kube-scheduler changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master node(s) "ip-10-0-138-143.us-west-2.compute.internal" not ready
Feb 14 14:08:29.127 E ns/openshift-apiserver pod/apiserver-k79dd node/ip-10-0-138-143.us-west-2.compute.internal container=openshift-apiserver container exited with code 1 (Error): "\nI0214 14:05:47.987876       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0214 14:05:47.994646       1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick\nI0214 14:05:47.994663       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0214 14:05:47.994698       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0214 14:05:47.994769       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0214 14:05:48.008351       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.61.58:2379: connect: connection refused". Reconnecting...\nW0214 14:05:48.008411       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.61.58:2379: connect: connection refused". Reconnecting...\nW0214 14:05:48.008536       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.61.58:2379: connect: connection refused". Reconnecting...\nW0214 14:05:48.008688       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://etcd.openshift-etcd.svc:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 172.30.61.58:2379: connect: connection refused". Reconnecting...\nE0214 14:05:48.039112       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.147332       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Feb 14 14:08:29.152 E ns/openshift-monitoring pod/node-exporter-zzbk7 node/ip-10-0-138-143.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:48:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:08:29.177 E ns/openshift-sdn pod/sdn-controller-f2phf node/ip-10-0-138-143.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0214 13:51:23.332602       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0214 14:03:18.683731       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"fdedfd2d-482e-42c0-951d-0de984cd6e0e", ResourceVersion:"35852", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717283266, loc:(*time.Location)(0x2b77ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-138-143\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-14T14:03:18Z\",\"renewTime\":\"2020-02-14T14:03:18Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-138-143 became leader'\nI0214 14:03:18.683871       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0214 14:03:18.689231       1 master.go:51] Initializing SDN master\nI0214 14:03:18.706121       1 network_controller.go:60] Started OpenShift Network Controller\n
Feb 14 14:08:29.209 E ns/openshift-controller-manager pod/controller-manager-q4lvq node/ip-10-0-138-143.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 14 14:08:29.231 E ns/openshift-multus pod/multus-admission-controller-j9pw4 node/ip-10-0-138-143.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 14 14:08:29.242 E ns/openshift-multus pod/multus-vd99m node/ip-10-0-138-143.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:08:29.255 E ns/openshift-machine-config-operator pod/machine-config-daemon-7fl8q node/ip-10-0-138-143.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:08:29.276 E ns/openshift-machine-config-operator pod/machine-config-server-9fclw node/ip-10-0-138-143.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 14:01:46.623588       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 14:01:46.624359       1 api.go:51] Launching server on :22624\nI0214 14:01:46.624413       1 api.go:51] Launching server on :22623\n
Feb 14 14:08:29.286 E ns/openshift-cluster-node-tuning-operator pod/tuned-k7xvf node/ip-10-0-138-143.us-west-2.compute.internal container=tuned container exited with code 143 (Error):  true\nI0214 14:05:33.055208     445 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:05:33.056508     445 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:05:33.161412     445 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:05:33.161895     445 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-6-ip-10-0-138-143.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:05:33.352992     445 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-138-143.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:05:33.743353     445 openshift-tuned.go:550] Pod (openshift-kube-apiserver/revision-pruner-6-ip-10-0-138-143.us-west-2.compute.internal) labels changed node wide: true\nI0214 14:05:38.055207     445 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:05:38.056453     445 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:05:38.158756     445 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:05:45.491583     445 openshift-tuned.go:550] Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-138-143.us-west-2.compute.internal) labels changed node wide: true\nI0214 14:05:48.056683     445 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:05:48.058437     445 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:05:48.212797     445 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:05:48.212907     445 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-138-143.us-west-2.compute.internal) labels changed node wide: true\nI0214 14:05:48.313198     445 openshift-tuned.go:137] Received signal: terminated\n
Feb 14 14:08:29.340 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): red revision has been compacted\nE0214 14:05:48.046486       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046503       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046544       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046564       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046679       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046973       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.046999       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047019       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047046       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047096       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047122       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047705       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:05:48.047737       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0214 14:05:48.360445       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\nI0214 14:05:48.360449       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-138-143.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\n
Feb 14 14:08:29.340 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0214 13:46:27.997390       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 14 14:08:29.340 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:56:31.626818       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:56:31.627097       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 13:56:31.832336       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 13:56:31.832585       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 14 14:08:29.362 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): I0214 13:47:55.605884       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0214 13:47:55.608459       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0214 13:47:55.609440       1 leaderelection.go:217] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Feb 14 14:08:29.362 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:04:36.484486       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:04:36.484870       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:04:46.491931       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:04:46.492167       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:04:56.501426       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:04:56.501805       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:05:06.508980       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:05:06.509810       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:05:16.516731       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:05:16.517099       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:05:26.525096       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:05:26.525388       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:05:36.532905       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:05:36.533134       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:05:46.540116       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:05:46.540799       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 14 14:08:29.362 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): 7:51.432883675 +0000 UTC))\nI0214 13:47:51.432921       1 tlsconfig.go:179] loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-14 13:05:11 +0000 UTC to 2020-02-15 13:05:11 +0000 UTC (now=2020-02-14 13:47:51.432910557 +0000 UTC))\nI0214 13:47:51.433117       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1581686677" (2020-02-14 13:24:53 +0000 UTC to 2022-02-13 13:24:54 +0000 UTC (now=2020-02-14 13:47:51.433108115 +0000 UTC))\nI0214 13:47:51.433286       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1581688071" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581688071" (2020-02-14 12:47:51 +0000 UTC to 2021-02-13 12:47:51 +0000 UTC (now=2020-02-14 13:47:51.433276267 +0000 UTC))\nI0214 13:47:51.433350       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1581688071" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1581688071" (2020-02-14 12:47:51 +0000 UTC to 2021-02-13 12:47:51 +0000 UTC (now=2020-02-14 13:47:51.433342917 +0000 UTC))\nI0214 13:47:51.433379       1 secure_serving.go:178] Serving securely on [::]:10257\nI0214 13:47:51.433407       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0214 13:47:51.433443       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 14 14:08:29.371 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-138-143.us-west-2.compute.internal node/ip-10-0-138-143.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): >|Memory<15331892Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:05:33.997924       1 scheduler.go:667] pod openshift-service-ca/service-serving-cert-signer-c4ff79575-7msnp is bound successfully on node "ip-10-0-140-160.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15946292Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15331892Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:05:34.577888       1 scheduler.go:667] pod openshift-dns-operator/dns-operator-675df86b5b-ldmn5 is bound successfully on node "ip-10-0-153-26.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15946308Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15331908Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:05:34.615858       1 scheduler.go:667] pod openshift-operator-lifecycle-manager/packageserver-6c859f4b7c-prpm5 is bound successfully on node "ip-10-0-140-160.us-west-2.compute.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<15946292Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15331892Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:05:35.424651       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-mmwns: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:05:40.425591       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-mmwns: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Feb 14 14:08:35.235 E ns/openshift-multus pod/multus-vd99m node/ip-10-0-138-143.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 14 14:08:41.358 E ns/openshift-machine-config-operator pod/machine-config-daemon-7fl8q node/ip-10-0-138-143.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:09:18.222 E ns/openshift-insights pod/insights-operator-744996cb69-8ftdc node/ip-10-0-140-160.us-west-2.compute.internal container=operator container exited with code 2 (Error): ET /metrics: (8.195285ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:06:00.146062       1 httplog.go:90] GET /metrics: (7.578234ms) 200 [Prometheus/2.14.0 10.128.2.19:44308]\nI0214 14:06:17.291409       1 status.go:298] The operator is healthy\nI0214 14:06:17.291460       1 status.go:373] No status update necessary, objects are identical\nI0214 14:06:27.955398       1 httplog.go:90] GET /metrics: (5.224803ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:06:30.139742       1 httplog.go:90] GET /metrics: (1.277088ms) 200 [Prometheus/2.14.0 10.128.2.19:44308]\nI0214 14:06:57.957647       1 httplog.go:90] GET /metrics: (4.776335ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:07:00.140475       1 httplog.go:90] GET /metrics: (1.980419ms) 200 [Prometheus/2.14.0 10.128.2.19:44308]\nI0214 14:07:17.292464       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0214 14:07:17.296980       1 configobserver.go:90] Found cloud.openshift.com token\nI0214 14:07:17.296999       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0214 14:07:27.958863       1 httplog.go:90] GET /metrics: (4.726649ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:07:30.140171       1 httplog.go:90] GET /metrics: (1.683161ms) 200 [Prometheus/2.14.0 10.128.2.19:44308]\nI0214 14:07:57.959039       1 httplog.go:90] GET /metrics: (4.863276ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:08:17.292405       1 status.go:298] The operator is healthy\nI0214 14:08:17.292447       1 status.go:373] No status update necessary, objects are identical\nI0214 14:08:27.959271       1 httplog.go:90] GET /metrics: (4.992647ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:08:30.145457       1 httplog.go:90] GET /metrics: (1.921613ms) 200 [Prometheus/2.14.0 10.129.2.15:36218]\nI0214 14:08:57.962532       1 httplog.go:90] GET /metrics: (8.500109ms) 200 [Prometheus/2.14.0 10.131.0.18:56288]\nI0214 14:09:00.137311       1 httplog.go:90] GET /metrics: (1.459721ms) 200 [Prometheus/2.14.0 10.129.2.15:36218]\n
Feb 14 14:09:19.871 E ns/openshift-authentication-operator pod/authentication-operator-745fdbddb7-b6xvh node/ip-10-0-140-160.us-west-2.compute.internal container=operator container exited with code 255 (Error): ll deployment replicas are ready","reason":"ProgressingOAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-02-14T13:37:49Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-14T13:29:16Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0214 14:07:36.554986       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"be4064e6-d4f4-4ff8-81a8-380c28bd4b4a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout"\nI0214 14:07:38.177564       1 status_controller.go:166] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-14T13:37:49Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-14T14:07:38Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-14T13:37:49Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-14T13:29:16Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0214 14:07:38.186509       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"be4064e6-d4f4-4ff8-81a8-380c28bd4b4a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to "",Progressing changed from True to False ("")\nI0214 14:09:17.406221       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:09:17.406318       1 leaderelection.go:66] leaderelection lost\n
Feb 14 14:09:20.827 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Feb 14 14:09:21.182 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6858ccdc98-mft8m node/ip-10-0-140-160.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ffinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\\n\"\nNodeControllerDegraded: The master node(s) \"ip-10-0-138-143.us-west-2.compute.internal\" not ready" to "NodeControllerDegraded: The master node(s) \"ip-10-0-138-143.us-west-2.compute.internal\" not ready"\nI0214 14:08:38.888563       1 status_controller.go:165] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-14T14:08:38Z","message":"StaticPodsDegraded: nodes/ip-10-0-138-143.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-138-143.us-west-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master node(s) are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-14T13:47:55Z","message":"Progressing: 3 nodes are at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-14T13:27:07Z","message":"Available: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-14T13:24:34Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0214 14:08:38.914905       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"cf597bd9-ba55-406e-bbfd-67c2a573d116", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("StaticPodsDegraded: nodes/ip-10-0-138-143.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-138-143.us-west-2.compute.internal container=\"scheduler\" is not ready\nNodeControllerDegraded: All master node(s) are ready")\nI0214 14:09:18.468974       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:09:18.469359       1 builder.go:217] server exited\n
Feb 14 14:09:22.537 E ns/openshift-console-operator pod/console-operator-5d479d8cb-qsldv node/ip-10-0-140-160.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): 4T14:05:30Z","message":"DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-02-14-130427","reason":"DeploymentAvailableFailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-02-14T13:29:02Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0214 14:06:33.137749       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"0787ea57-8577-4969-89d7-52dc5a6e843d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "" to "OAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)"\nI0214 14:06:33.215854       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-14T13:29:02Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-14T13:51:06Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-14T14:06:33Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-14T13:29:02Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0214 14:06:33.225694       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"0787ea57-8577-4969-89d7-52dc5a6e843d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "OAuthClientSyncDegraded: the server is currently unable to handle the request (get oauthclients.oauth.openshift.io console)" to "",Available changed from False to True ("")\nI0214 14:09:21.366890       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:09:21.366952       1 leaderelection.go:66] leaderelection lost\n
Feb 14 14:09:23.438 E ns/openshift-console pod/console-cb445dd98-zj784 node/ip-10-0-140-160.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020/02/14 14:02:05 cmd/main: cookies are secure!\n2020/02/14 14:02:05 cmd/main: Binding to [::]:8443...\n2020/02/14 14:02:05 cmd/main: using TLS\n
Feb 14 14:09:24.808 E ns/openshift-monitoring pod/prometheus-operator-6c8db4bf84-s6gnn node/ip-10-0-140-160.us-west-2.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:09:25.835 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-df46cf599-pfld6 node/ip-10-0-140-160.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:09:26.880 E ns/openshift-console pod/downloads-57dc665556-s487k node/ip-10-0-140-160.us-west-2.compute.internal container=download-server container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:09:27.298 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-689866f95c-htp64 node/ip-10-0-140-160.us-west-2.compute.internal container=operator container exited with code 255 (Error): s/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:08:34.084148       1 request.go:538] Throttling request took 196.430117ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:08:40.674911       1 httplog.go:90] GET /metrics: (5.445164ms) 200 [Prometheus/2.14.0 10.131.0.18:42520]\nI0214 14:08:53.884717       1 request.go:538] Throttling request took 151.908921ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:08:54.084663       1 request.go:538] Throttling request took 196.489075ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:09:00.296710       1 httplog.go:90] GET /metrics: (5.743858ms) 200 [Prometheus/2.14.0 10.129.2.15:46480]\nI0214 14:09:10.674684       1 httplog.go:90] GET /metrics: (5.61677ms) 200 [Prometheus/2.14.0 10.131.0.18:42520]\nI0214 14:09:13.883837       1 request.go:538] Throttling request took 164.812215ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0214 14:09:14.083841       1 request.go:538] Throttling request took 196.608704ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0214 14:09:23.718957       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 39 items received\nW0214 14:09:23.827901       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 40719 (40921)\nI0214 14:09:24.170397       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0214 14:09:24.171368       1 builder.go:217] server exited\n
Feb 14 14:09:27.452 E ns/openshift-operator-lifecycle-manager pod/packageserver-6c859f4b7c-prpm5 node/ip-10-0-140-160.us-west-2.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:09:28.112 E ns/openshift-machine-config-operator pod/etcd-quorum-guard-6867d6d8d6-6hpcj node/ip-10-0-140-160.us-west-2.compute.internal container=guard container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 14 14:09:39.405 E kube-apiserver failed contacting the API: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=41410&timeout=7m31s&timeoutSeconds=451&watch=true: dial tcp 35.160.238.20:6443: connect: connection refused
Feb 14 14:09:39.412 E kube-apiserver failed contacting the API: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=41377&timeout=8m57s&timeoutSeconds=537&watch=true: dial tcp 35.160.238.20:6443: connect: connection refused
Feb 14 14:09:46.085 E kube-apiserver Kube API started failing: Get https://api.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 14 14:09:46.317 E clusteroperator/network changed Degraded to True: ApplyOperatorConfig: Error while updating operator configuration: could not apply (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-sdn: could not update object (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /openshift-sdn: Put https://api-int.ci-op-np7iik50-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-sdn: unexpected EOF
Feb 14 14:09:48.318 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-6df75b75bc-msf9m node/ip-10-0-138-143.us-west-2.compute.internal container=cluster-node-tuning-operator container exited with code 255 (Error): I0214 14:09:47.454097       1 main.go:27] Go Version: go1.12.9\nI0214 14:09:47.454178       1 main.go:28] Go OS/Arch: linux/amd64\nI0214 14:09:47.454185       1 main.go:29] node-tuning Version: 769ba5c-dirty\nI0214 14:09:47.454194       1 main.go:45] Operator namespace: openshift-cluster-node-tuning-operator\nI0214 14:09:47.454734       1 leader.go:46] Trying to become the leader.\nF0214 14:09:47.457342       1 main.go:58] Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 14 14:09:54.087 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 14:10:12.353 E ns/openshift-monitoring pod/node-exporter-4sjkh node/ip-10-0-139-242.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:49:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:49:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:10:13.235 E ns/openshift-multus pod/multus-hkl74 node/ip-10-0-139-242.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:10:13.322 E ns/openshift-sdn pod/ovs-tvmfl node/ip-10-0-139-242.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): T14:07:45.100Z|00154|connmgr|INFO|br0<->unix#802: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:07:45.132Z|00155|bridge|INFO|bridge br0: deleted interface veth93d10437 on port 11\n2020-02-14T14:07:45.175Z|00156|connmgr|INFO|br0<->unix#806: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:07:45.213Z|00157|connmgr|INFO|br0<->unix#810: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:07:45.240Z|00158|bridge|INFO|bridge br0: deleted interface veth890b38ee on port 7\n2020-02-14T14:07:45.473Z|00022|jsonrpc|WARN|Dropped 7 log messages in last 900 seconds (most recently, 900 seconds ago) due to excessive rate\n2020-02-14T14:07:45.473Z|00023|jsonrpc|WARN|unix#735: receive error: Connection reset by peer\n2020-02-14T14:07:45.473Z|00024|reconnect|WARN|unix#735: connection dropped (Connection reset by peer)\n2020-02-14T14:07:45.284Z|00159|connmgr|INFO|br0<->unix#813: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:07:45.355Z|00160|connmgr|INFO|br0<->unix#816: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:07:45.383Z|00161|bridge|INFO|bridge br0: deleted interface veth7159af57 on port 17\n2020-02-14T14:07:45.419Z|00162|connmgr|INFO|br0<->unix#819: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:07:45.453Z|00163|connmgr|INFO|br0<->unix#822: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:07:45.480Z|00164|bridge|INFO|bridge br0: deleted interface veth005e2291 on port 3\n2020-02-14T14:07:45.532Z|00165|connmgr|INFO|br0<->unix#825: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:07:45.574Z|00166|connmgr|INFO|br0<->unix#828: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:07:45.609Z|00167|bridge|INFO|bridge br0: deleted interface vethe992620a on port 9\n2020-02-14T14:08:14.474Z|00168|connmgr|INFO|br0<->unix#850: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:08:14.501Z|00169|connmgr|INFO|br0<->unix#853: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:08:14.522Z|00170|bridge|INFO|bridge br0: deleted interface vethb7455cb4 on port 13\nExiting ovs-vswitchd (78283).\nTerminated\n
Feb 14 14:10:13.410 E ns/openshift-machine-config-operator pod/machine-config-daemon-xsp8p node/ip-10-0-139-242.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:10:13.410 E ns/openshift-cluster-node-tuning-operator pod/tuned-6b2s5 node/ip-10-0-139-242.us-west-2.compute.internal container=tuned container exited with code 143 (Error): 079 INFO     tuned.daemon.controller: starting controller\n2020-02-14 14:02:53,079 INFO     tuned.daemon.daemon: starting tuning\n2020-02-14 14:02:53,085 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-02-14 14:02:53,086 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-02-14 14:02:53,090 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-02-14 14:02:53,092 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-02-14 14:02:53,094 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-02-14 14:02:53,259 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-02-14 14:02:53,260 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0214 14:05:48.609819  102720 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0214 14:05:48.611274  102720 openshift-tuned.go:881] Pod event watch channel closed.\nI0214 14:05:48.611301  102720 openshift-tuned.go:883] Increasing resyncPeriod to 128\nI0214 14:07:56.611505  102720 openshift-tuned.go:209] Extracting tuned profiles\nI0214 14:07:56.613532  102720 openshift-tuned.go:739] Resync period to pull node/pod labels: 128 [s]\nI0214 14:07:56.623820  102720 openshift-tuned.go:550] Pod (openshift-sdn/ovs-tvmfl) labels changed node wide: true\nI0214 14:08:01.621988  102720 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:08:01.623429  102720 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0214 14:08:01.624518  102720 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:08:01.737237  102720 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0214 14:08:24.662219  102720 openshift-tuned.go:550] Pod (e2e-k8s-service-upgrade-869/service-test-r86k5) labels changed node wide: true\n
Feb 14 14:10:21.714 E ns/openshift-machine-config-operator pod/machine-config-daemon-xsp8p node/ip-10-0-139-242.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:11:48.445 E ns/openshift-monitoring pod/node-exporter-76vwj node/ip-10-0-140-160.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): 2-14T13:48:09Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-14T13:48:09Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 14 14:11:48.459 E ns/openshift-sdn pod/ovs-gsmkc node/ip-10-0-140-160.us-west-2.compute.internal container=openvswitch container exited with code 143 (Error): in the last 0 s (4 deletes)\n2020-02-14T14:09:25.276Z|00355|bridge|INFO|bridge br0: deleted interface vethbd85e619 on port 30\n2020-02-14T14:09:25.348Z|00356|connmgr|INFO|br0<->unix#1325: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:25.491Z|00357|connmgr|INFO|br0<->unix#1328: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:25.538Z|00358|bridge|INFO|bridge br0: deleted interface vethe82a2a8b on port 47\n2020-02-14T14:09:25.611Z|00359|connmgr|INFO|br0<->unix#1331: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:25.680Z|00360|connmgr|INFO|br0<->unix#1334: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:25.805Z|00361|bridge|INFO|bridge br0: deleted interface vethc909ff52 on port 6\n2020-02-14T14:09:26.350Z|00362|connmgr|INFO|br0<->unix#1337: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:26.406Z|00363|connmgr|INFO|br0<->unix#1340: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:26.480Z|00364|bridge|INFO|bridge br0: deleted interface vethc49bd346 on port 4\n2020-02-14T14:09:26.530Z|00365|connmgr|INFO|br0<->unix#1343: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:26.585Z|00366|connmgr|INFO|br0<->unix#1346: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:26.673Z|00367|bridge|INFO|bridge br0: deleted interface veth21cf781c on port 35\n2020-02-14T14:09:26.889Z|00368|connmgr|INFO|br0<->unix#1349: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:26.963Z|00369|connmgr|INFO|br0<->unix#1352: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:27.009Z|00370|bridge|INFO|bridge br0: deleted interface vethde80a4fb on port 45\n2020-02-14T14:09:26.459Z|00036|reconnect|WARN|unix#1142: connection dropped (Broken pipe)\n2020-02-14T14:09:27.243Z|00371|connmgr|INFO|br0<->unix#1355: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-14T14:09:27.334Z|00372|connmgr|INFO|br0<->unix#1358: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-14T14:09:27.453Z|00373|bridge|INFO|bridge br0: deleted interface veth590b2866 on port 46\nExiting ovs-vswitchd (75871).\nTerminated\n
Feb 14 14:11:48.517 E ns/openshift-machine-config-operator pod/machine-config-server-zsnsk node/ip-10-0-140-160.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0214 14:01:55.153299       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0214 14:01:55.154716       1 api.go:51] Launching server on :22624\nI0214 14:01:55.154861       1 api.go:51] Launching server on :22623\n
Feb 14 14:11:48.532 E ns/openshift-multus pod/multus-admission-controller-dkgfh node/ip-10-0-140-160.us-west-2.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 14 14:11:48.548 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=kube-apiserver-7 container exited with code 1 (Error): as been compacted\nE0214 14:09:38.656263       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:09:38.656353       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:09:38.656279       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0214 14:09:38.733211       1 available_controller.go:427] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0214 14:09:38.779420       1 available_controller.go:427] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0214 14:09:38.791756       1 available_controller.go:427] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0214 14:09:38.801649       1 available_controller.go:427] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.842544       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-140-160.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0214 14:09:38.842690       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 14 14:11:48.548 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0214 13:44:32.227065       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 14 14:11:48.548 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 14:04:36.101152       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:04:36.101434       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0214 14:04:36.306919       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:04:36.307173       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 14 14:11:48.548 E ns/openshift-machine-config-operator pod/machine-config-daemon-mvxnq node/ip-10-0-140-160.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 14 14:11:48.548 E ns/openshift-controller-manager pod/controller-manager-bvr9k node/ip-10-0-140-160.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Feb 14 14:11:48.564 E ns/openshift-sdn pod/sdn-controller-bnkcm node/ip-10-0-140-160.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0214 13:51:34.599242       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 14 14:11:48.605 E ns/openshift-multus pod/multus-gx9wp node/ip-10-0-140-160.us-west-2.compute.internal container=kube-multus container exited with code 143 (Error): 
Feb 14 14:11:48.606 E ns/openshift-cluster-node-tuning-operator pod/tuned-2b4sn node/ip-10-0-140-160.us-west-2.compute.internal container=tuned container exited with code 143 (Error):  Pod (openshift-kube-scheduler/revision-pruner-6-ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:09:23.653267  101086 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-3-ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: false\nI0214 14:09:23.710234  101086 openshift-tuned.go:550] Pod (openshift-authentication-operator/authentication-operator-745fdbddb7-b6xvh) labels changed node wide: true\nI0214 14:09:25.635410  101086 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:09:25.637208  101086 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:09:25.903892  101086 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:09:25.904003  101086 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-4-ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: true\nI0214 14:09:30.635393  101086 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:09:30.636854  101086 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:09:30.793709  101086 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:09:33.562891  101086 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/olm-operator-7f48bb9bc4-k2nb9) labels changed node wide: true\nI0214 14:09:35.636088  101086 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0214 14:09:35.639524  101086 openshift-tuned.go:441] Getting recommended profile...\nI0214 14:09:35.745265  101086 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0214 14:09:38.602043  101086 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-140-160.us-west-2.compute.internal) labels changed node wide: true\n
Feb 14 14:11:48.606 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=scheduler container exited with code 2 (Error): und node resource: "Capacity: CPU<4>|Memory<15946308Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15331908Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0214 14:09:26.389520       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-g2dc8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nE0214 14:09:26.428742       1 factory.go:585] pod is already present in the activeQ\nI0214 14:09:26.434191       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-g2dc8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:09:28.138378       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-g2dc8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:09:33.570838       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-g2dc8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0214 14:09:38.140216       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6867d6d8d6-g2dc8: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
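The scheduler messages above reject etcd-quorum-guard pods because of a hard pod anti-affinity rule combined with cordoned masters during the rolling upgrade. As a rough illustration only (not taken from this job's manifests), the sketch below builds the kind of required, hostname-scoped anti-affinity that produces the "didn't satisfy existing pods anti-affinity rules" message; the `name: etcd-quorum-guard` label value is an assumption.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard anti-affinity: no two pods matching this selector may share a
	// node (topologyKey kubernetes.io/hostname). During an upgrade the
	// masters are cordoned one at a time, so with three replicas and only
	// two schedulable masters the third replica cannot be placed, which is
	// exactly what the scheduler log above reports while it waits.
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{
				{
					LabelSelector: &metav1.LabelSelector{
						// Assumed label; the real deployment's selector may differ.
						MatchLabels: map[string]string{"name": "etcd-quorum-guard"},
					},
					TopologyKey: "kubernetes.io/hostname",
				},
			},
		},
	}
	fmt.Printf("%+v\n", antiAffinity)
}

The condition clears on its own once the cordoned master is uncordoned and becomes schedulable again, which is why the scheduler only logs "waiting" rather than failing the pod outright.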
Feb 14 14:11:48.636 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=cluster-policy-controller-6 container exited with code 1 (Error): o old.\nW0214 14:05:49.064873       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.NetworkPolicy ended with: too old resource version: 19796 (38274)\nW0214 14:05:49.095894       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Namespace ended with: too old resource version: 19790 (38274)\nW0214 14:05:49.096145       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 19790 (38274)\nW0214 14:05:49.096261       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1beta1.Ingress ended with: too old resource version: 19794 (38274)\nW0214 14:06:03.031740       1 reflector.go:289] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: The resourceVersion for the provided watch is too old.\nW0214 14:06:17.358964       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nE0214 14:06:19.097357       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nE0214 14:06:38.016432       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to watch *v1.Route: the server is currently unable to handle the request (get routes.route.openshift.io)\nW0214 14:09:28.022166       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\nW0214 14:09:38.890490       1 reflector.go:289] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: The resourceVersion for the provided watch is too old.\n
Feb 14 14:11:48.636 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=kube-controller-manager-cert-syncer-6 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:08:26.835871       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:08:26.836245       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:08:36.847085       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:08:36.847373       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:08:46.855124       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:08:46.855745       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:08:56.861965       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:08:56.862242       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:09:06.874957       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:09:06.875645       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:09:16.907359       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:09:16.909279       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:09:27.044986       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:09:27.045293       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0214 14:09:37.056776       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0214 14:09:37.057018       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 14 14:11:48.636 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-140-160.us-west-2.compute.internal node/ip-10-0-140-160.us-west-2.compute.internal container=kube-controller-manager-6 container exited with code 2 (Error): ect has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.627553       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"901ae53e-edee-400f-812c-6a2fdf81ab65", APIVersion:"v1", ResourceVersion:"41389", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.635789       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.635816       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"901ae53e-edee-400f-812c-6a2fdf81ab65", APIVersion:"v1", ResourceVersion:"41389", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.647517       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-etcd/etcd", retrying. Error: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\nI0214 14:09:38.647641       1 event.go:255] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openshift-etcd", Name:"etcd", UID:"901ae53e-edee-400f-812c-6a2fdf81ab65", APIVersion:"v1", ResourceVersion:"41389", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again\n
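The repeated "the object has been modified; please apply your changes to the latest version and try again" errors here (and in the kube-apiserver entry above) are ordinary optimistic-concurrency conflicts: the writer held a stale resourceVersion. A minimal client-go sketch of the usual remedy, re-reading and retrying on conflict, is below; the kubeconfig path and annotation key are placeholders, the endpoints name mirrors the one in the log, and the calls use the pre-1.18, context-free client-go signatures current for this 4.3-era job.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a client from a local kubeconfig (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-runs the closure whenever the apiserver answers
	// 409 Conflict, re-reading the object each time so the update is
	// applied to the latest resourceVersion instead of a stale one.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ep, getErr := client.CoreV1().Endpoints("openshift-etcd").Get("etcd", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if ep.Annotations == nil {
			ep.Annotations = map[string]string{}
		}
		ep.Annotations["example.com/touched"] = "true" // placeholder change
		_, updateErr := client.CoreV1().Endpoints("openshift-etcd").Update(ep)
		return updateErr
	})
	if err != nil {
		fmt.Println("update failed after retries:", err)
	}
}

The endpoints controller already retries this way internally, so the conflicts logged during the etcd endpoint churn at 14:09:38 are noisy but self-healing.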
Feb 14 14:11:54.087 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 14 14:11:55.393 E ns/openshift-multus pod/multus-gx9wp node/ip-10-0-140-160.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Feb 14 14:12:00.405 E ns/openshift-machine-config-operator pod/machine-config-daemon-mvxnq node/ip-10-0-140-160.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 14 14:12:20.694 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus ClusterRole failed: updating ClusterRole object failed: Put https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/prometheus-k8s: read tcp 10.129.0.21:60980->172.30.0.1:443: read: connection reset by peer
Feb 14 14:12:53.586 E ns/openshift-marketplace pod/certified-operators-8597b4d948-hx5tx node/ip-10-0-138-12.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error):