Result          SUCCESS
Tests           3 failed / 22 succeeded
Started         2020-04-08 13:50
Elapsed         1h26m
Work namespace  ci-op-mxwilb0y
Refs            openshift-4.5:fe90dcbe
                44:8b80929a
pod             cc92759d-799f-11ea-bb46-0a58ac1030d8
repo            openshift/etcd
revision        1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted 36m22s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 2s of 33m40s (0%):

Apr 08 14:58:31.208 E ns/e2e-k8s-service-lb-available-17 svc/service-test Service stopped responding to GET requests on reused connections
Apr 08 14:58:32.208 E ns/e2e-k8s-service-lb-available-17 svc/service-test Service is not responding to GET requests on reused connections
Apr 08 14:58:32.515 I ns/e2e-k8s-service-lb-available-17 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1586358413.xml
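
The gap above is how the upgrade monitor reports it: it keeps issuing GET requests to the test service over kept-alive (reused) connections and records when responses stop and resume. As a rough, hypothetical illustration only (this is not the openshift-tests monitor code), the Go sketch below polls a placeholder load-balancer hostname once per second with a keep-alive HTTP client and prints stop/resume lines in a similar spirit; the target URL, polling interval, and output format are assumptions for the example.

    // disruption_probe.go: a minimal sketch, not the openshift-tests monitor.
    // It polls a service endpoint over reused (keep-alive) connections and
    // logs when the service stops and resumes answering GET requests.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder hostname; in the real test this would be the service
        // load balancer created in the e2e-k8s-service-lb-available namespace.
        const target = "http://example-service-lb.invalid/"

        // Keep-alives are on by default, so successive GETs reuse the connection,
        // matching the "reused connections" wording in the events above.
        client := &http.Client{Timeout: 3 * time.Second}

        var downSince time.Time
        for {
            resp, err := client.Get(target)
            ok := err == nil && resp.StatusCode == http.StatusOK
            if resp != nil {
                resp.Body.Close()
            }
            switch {
            case !ok && downSince.IsZero():
                downSince = time.Now()
                fmt.Printf("%s E service stopped responding to GET requests: %v\n",
                    time.Now().Format(time.StampMilli), err)
            case ok && !downSince.IsZero():
                fmt.Printf("%s I service started responding again after %s\n",
                    time.Now().Format(time.StampMilli),
                    time.Since(downSince).Round(time.Millisecond))
                downSince = time.Time{}
            }
            time.Sleep(time.Second)
        }
    }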



Cluster upgrade Kubernetes and OpenShift APIs remain available 35m51s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 11s of 35m50s (1%):

Apr 08 15:01:42.401 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 34.212.217.72:6443: connect: connection refused
Apr 08 15:01:42.401 E kube-apiserver Kube API started failing: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 34.212.217.72:6443: connect: connection refused
Apr 08 15:01:43.049 - 5s    E kube-apiserver Kube API is not responding to GET requests
Apr 08 15:01:43.049 - 5s    E openshift-apiserver OpenShift API is not responding to GET requests
Apr 08 15:01:48.695 I openshift-apiserver OpenShift API started responding to GET requests
Apr 08 15:01:48.702 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1586358413.xml
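
The "(1%)" figure is just unavailable time divided by the monitoring window. As a back-of-the-envelope check only, and considering just the single gap visible in this excerpt (the monitor reports at least 11s across the whole run), the Go sketch below parses the Kube API stop/start timestamps above and prints the outage as a fraction of the 35m50s window; the timestamp layout is an assumption for the example.

    // outage_fraction.go: a rough check of the summary arithmetic, not part of
    // the test suite. It only accounts for the one gap shown in the excerpt.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching "Apr 08 15:01:42.401" (the year is irrelevant for the difference).
        const layout = "Jan 02 15:04:05.000"
        down, _ := time.Parse(layout, "Apr 08 15:01:42.401") // Kube API started failing
        up, _ := time.Parse(layout, "Apr 08 15:01:48.702")   // Kube API responding again

        outage := up.Sub(down)
        window := 35*time.Minute + 50*time.Second // duration of the availability check

        fmt.Printf("outage: %s of %s (%.1f%%)\n",
            outage.Round(time.Second), window, 100*outage.Seconds()/window.Seconds())
    }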



openshift-tests Monitor cluster while tests execute 36m26s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
186 error level events were detected during this test run:

Apr 08 14:36:43.807 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): t: connection refused\nE0408 14:36:28.513271       1 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consolelinks?allowWatchBookmarks=true&resourceVersion=24661&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 14:36:28.514351       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:6443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=22960&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0408 14:36:28.515441       1 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machine.openshift.io/v1beta1/machines?allowWatchBookmarks=true&resourceVersion=24639&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp [::1]:6443: connect: connection refused\nW0408 14:36:29.024026       1 garbagecollector.go:646] failed to discover preferred resources: Get https://localhost:6443/api?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0408 14:36:29.024049       1 garbagecollector.go:175] no resources reported by discovery, skipping garbage collector sync\nE0408 14:36:29.045437       1 resource_quota_controller.go:408] failed to discover resources: Get https://localhost:6443/api?timeout=32s: dial tcp [::1]:6443: connect: connection refused\nI0408 14:36:29.069676       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nI0408 14:36:29.069817       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-131-146_fea55933-1fd0-47ed-8eac-7502bcaa75b5 stopped leading\nF0408 14:36:29.069884       1 controllermanager.go:291] leaderelection lost\n
Apr 08 14:36:50.529 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-131-146.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 08 14:37:11.354 E ns/openshift-machine-api pod/machine-api-controllers-5c8d894477-gb72p node/ip-10-0-145-206.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 14:37:30.559 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 14:38:12.394 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-857749454d-b7hdj node/ip-10-0-131-146.us-west-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 255 (Error): e":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-04-08T14:13:03Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-04-08-135115"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0408 14:23:54.980141       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"ee12cfb0-4b69-4869-8cfe-40ce66b5e7c0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 14:23:55.003390       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"ee12cfb0-4b69-4869-8cfe-40ce66b5e7c0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0408 14:38:01.854652       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0408 14:38:01.854697       1 leaderelection.go:66] leaderelection lost\n
Apr 08 14:39:50.766 E ns/openshift-cluster-machine-approver pod/machine-approver-5c999d6d9f-7r6x6 node/ip-10-0-131-146.us-west-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): eta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=568&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 14:36:38.380864       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=329&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 14:36:39.381294       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=593&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 14:36:40.381756       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=322&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 14:36:41.382161       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=366&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0408 14:36:42.382647       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:240: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=22385&timeoutSeconds=504&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Apr 08 14:39:53.656 E ns/openshift-kube-storage-version-migrator pod/migrator-585566844f-sl5cf node/ip-10-0-136-206.us-west-2.compute.internal container/migrator container exited with code 2 (Error): 
Apr 08 14:39:59.168 E ns/openshift-insights pod/insights-operator-564bfd8ff8-trnqq node/ip-10-0-129-31.us-west-2.compute.internal container/operator container exited with code 2 (Error): og.go:90] GET /metrics: (5.36285ms) 200 [Prometheus/2.15.2 10.129.2.13:47624]\nI0408 14:38:07.120413       1 httplog.go:90] GET /metrics: (1.57237ms) 200 [Prometheus/2.15.2 10.128.2.12:40922]\nI0408 14:38:31.804760       1 httplog.go:90] GET /metrics: (5.388381ms) 200 [Prometheus/2.15.2 10.129.2.13:47624]\nI0408 14:38:37.120310       1 httplog.go:90] GET /metrics: (1.444657ms) 200 [Prometheus/2.15.2 10.128.2.12:40922]\nI0408 14:38:50.437119       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0408 14:38:50.437527       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 0 items received\nI0408 14:38:50.825857       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 24897 (26289)\nI0408 14:38:51.017278       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: watch of *v1.ConfigMap ended with: too old resource version: 25507 (26289)\nI0408 14:38:51.826158       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 14:38:52.017448       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0408 14:39:01.806437       1 httplog.go:90] GET /metrics: (6.816565ms) 200 [Prometheus/2.15.2 10.129.2.13:47624]\nI0408 14:39:07.120767       1 httplog.go:90] GET /metrics: (1.896019ms) 200 [Prometheus/2.15.2 10.128.2.12:40922]\nI0408 14:39:12.567342       1 status.go:298] The operator is healthy\nI0408 14:39:31.806430       1 httplog.go:90] GET /metrics: (6.887951ms) 200 [Prometheus/2.15.2 10.129.2.13:47624]\nI0408 14:39:37.120485       1 httplog.go:90] GET /metrics: (1.518705ms) 200 [Prometheus/2.15.2 10.128.2.12:40922]\n
Apr 08 14:40:40.376 E ns/openshift-controller-manager pod/controller-manager-mjc2r node/ip-10-0-129-31.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0408 14:19:34.032345       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 14:19:34.033606       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable-initial@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 14:19:34.033624       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable-initial@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 14:19:34.033714       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 14:19:34.033756       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 14:40:48.460 E ns/openshift-controller-manager pod/controller-manager-25764 node/ip-10-0-145-206.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): er.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "jenkins-agent-nodejs": the object has been modified; please apply your changes to the latest version and try again\nE0408 14:40:32.595999       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-nodejs": the image stream was updated from "31220" to "31505"\nE0408 14:40:32.615365       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-nodejs": the image stream was updated from "31220" to "31505"\nE0408 14:40:32.647048       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-nodejs": the image stream was updated from "31220" to "31505"\nE0408 14:40:32.684843       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-nodejs": the image stream was updated from "31220" to "31505"\nE0408 14:40:32.747559       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins-agent-nodejs": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-nodejs": the image stream was updated from "31220" to "31505"\nE0408 14:40:32.759757       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins": the image stream was updated from "31224" to "31537"\nE0408 14:40:32.779858       1 imagestream_controller.go:135] Error syncing image stream "openshift/jenkins": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins": the image stream was updated from "31224" to "31537"\n
Apr 08 14:40:48.536 E ns/openshift-monitoring pod/node-exporter-dm2sk node/ip-10-0-145-206.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:18:59Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:18:59Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:40:58.725 E ns/openshift-monitoring pod/grafana-ffc4db748-99wh4 node/ip-10-0-154-130.us-west-2.compute.internal container/grafana container exited with code 1 (Error): 
Apr 08 14:40:58.725 E ns/openshift-monitoring pod/grafana-ffc4db748-99wh4 node/ip-10-0-154-130.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 08 14:40:59.747 E ns/openshift-monitoring pod/thanos-querier-5b4d55469b-rrdfd node/ip-10-0-154-130.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 14:27:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:27:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:27:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:27:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 14:27:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:27:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:27:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:27:31 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0408 14:27:31.489470       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 14:27:31 http.go:107: HTTPS: listening on [::]:9091\n
Apr 08 14:40:59.768 E ns/openshift-monitoring pod/node-exporter-hz7qv node/ip-10-0-129-31.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:19:11Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:11Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:41:06.805 E ns/openshift-console-operator pod/console-operator-66cc88dbb4-zl7tc node/ip-10-0-129-31.us-west-2.compute.internal container/console-operator container exited with code 1 (Error):  reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 14:41:05.690641       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 14:41:05.690686       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 14:41:05.690732       1 reflector.go:181] Stopping reflector *v1.Proxy (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0408 14:41:05.690775       1 reflector.go:181] Stopping reflector *v1.Infrastructure (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0408 14:41:05.690804       1 controller.go:70] Shutting down Console\nI0408 14:41:05.690824       1 base_controller.go:101] Shutting down ResourceSyncController ...\nI0408 14:41:05.690835       1 base_controller.go:101] Shutting down StatusSyncer_console ...\nI0408 14:41:05.690845       1 controller.go:181] shutting down ConsoleRouteSyncController\nI0408 14:41:05.690855       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0408 14:41:05.690868       1 base_controller.go:101] Shutting down ManagementStateController ...\nI0408 14:41:05.690876       1 controller.go:144] shutting down ConsoleServiceSyncController\nI0408 14:41:05.690890       1 base_controller.go:101] Shutting down UnsupportedConfigOverridesController ...\nI0408 14:41:05.690900       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0408 14:41:05.693162       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 14:41:05.693186       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0408 14:41:05.693197       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nW0408 14:41:05.693210       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Apr 08 14:41:07.370 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-56.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:40:59.461Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:40:59.464Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:40:59.467Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:40:59.468Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:40:59.469Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:40:59.469Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:40:59.469Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:40:59.469Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:40:59.469Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:40:59.471Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:40:59.471Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:41:12.354 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:27:41.366Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:27:41.371Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:27:41.371Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:27:41.372Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:27:41.372Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:27:41.372Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:27:41.373Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:27:41.373Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:41:20.968 E ns/openshift-monitoring pod/thanos-querier-5b4d55469b-4b8ph node/ip-10-0-136-206.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 14:27:19 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:27:19 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:27:19 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:27:19 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 14:27:19 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:27:19 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:27:19 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:27:19 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 14:27:19 http.go:107: HTTPS: listening on [::]:9091\nI0408 14:27:19.280671       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 14:41:21.407 E ns/openshift-monitoring pod/node-exporter-mqfcz node/ip-10-0-128-56.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:22:49Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:41:34.829 E ns/openshift-controller-manager pod/controller-manager-fgqns node/ip-10-0-145-206.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0408 14:40:58.902522       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 14:40:58.905539       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 14:40:58.905897       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 14:40:58.905842       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 14:40:58.906794       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 14:41:34.965 E ns/openshift-controller-manager pod/controller-manager-rqfjn node/ip-10-0-129-31.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0408 14:40:53.895012       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 14:40:53.896646       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 14:40:53.896667       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 14:40:53.896718       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 14:40:53.896768       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 14:41:35.262 E ns/openshift-controller-manager pod/controller-manager-hlkhm node/ip-10-0-131-146.us-west-2.compute.internal container/controller-manager container exited with code 137 (Error): I0408 14:40:44.571207       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 14:40:44.590786       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 14:40:44.590871       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 14:40:44.590882       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 14:40:44.591001       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 14:41:40.276 E ns/openshift-monitoring pod/node-exporter-lzl7d node/ip-10-0-131-146.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:19:22Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:19:22Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:41:40.504 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-56.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 14:26:30 Watching directory: "/etc/alertmanager/config"\n
Apr 08 14:41:40.504 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-56.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 14:26:30 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 14:26:30 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:26:30 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:26:30 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 14:26:30 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:26:30 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 14:26:30 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:26:30 http.go:107: HTTPS: listening on [::]:9095\nI0408 14:26:30.417923       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 14:41:40.507 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:41:34.847Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:41:34.849Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:41:34.850Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:41:34.850Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:41:34.850Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:41:34.852Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:41:34.852Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:41:57.072 E ns/openshift-marketplace pod/redhat-marketplace-6cb94b8b89-xg4g7 node/ip-10-0-136-206.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 08 14:41:58.893 E ns/openshift-console pod/console-8579d6b965-hr4vj node/ip-10-0-145-206.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T14:27:27Z cmd/main: cookies are secure!\n2020-04-08T14:27:27Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T14:27:37Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T14:27:47Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T14:27:57Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T14:28:07Z cmd/main: Binding to [::]:8443...\n2020-04-08T14:28:07Z cmd/main: using TLS\n
Apr 08 14:42:08.116 E ns/openshift-marketplace pod/certified-operators-6bd5dcd87d-pn4dt node/ip-10-0-136-206.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 14:42:12.469 E ns/openshift-console pod/console-8579d6b965-dxlqf node/ip-10-0-131-146.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T14:27:25Z cmd/main: cookies are secure!\n2020-04-08T14:27:25Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-04-08T14:27:35Z cmd/main: Binding to [::]:8443...\n2020-04-08T14:27:35Z cmd/main: using TLS\n
Apr 08 14:42:15.134 E ns/openshift-monitoring pod/node-exporter-bqctk node/ip-10-0-136-206.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:22:37Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:22:37Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:43:35.165 E ns/openshift-sdn pod/sdn-controller-8fwnw node/ip-10-0-145-206.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): 95       1 vnids.go:115] Allocated netid 7064274 for namespace "openshift-console"\nI0408 14:17:57.025266       1 vnids.go:115] Allocated netid 11645705 for namespace "openshift-console-operator"\nI0408 14:18:43.992954       1 vnids.go:115] Allocated netid 16213680 for namespace "openshift-ingress"\nI0408 14:22:03.687853       1 subnets.go:149] Created HostSubnet ip-10-0-136-206.us-west-2.compute.internal (host: "ip-10-0-136-206.us-west-2.compute.internal", ip: "10.0.136.206", subnet: "10.131.0.0/23")\nI0408 14:22:15.969586       1 subnets.go:149] Created HostSubnet ip-10-0-128-56.us-west-2.compute.internal (host: "ip-10-0-128-56.us-west-2.compute.internal", ip: "10.0.128.56", subnet: "10.128.2.0/23")\nI0408 14:22:21.625892       1 subnets.go:149] Created HostSubnet ip-10-0-154-130.us-west-2.compute.internal (host: "ip-10-0-154-130.us-west-2.compute.internal", ip: "10.0.154.130", subnet: "10.129.2.0/23")\nI0408 14:30:31.597197       1 vnids.go:115] Allocated netid 13729592 for namespace "e2e-k8s-service-lb-available-17"\nI0408 14:30:31.607791       1 vnids.go:115] Allocated netid 10330120 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-625"\nI0408 14:30:31.615548       1 vnids.go:115] Allocated netid 5541474 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-7533"\nI0408 14:30:31.622543       1 vnids.go:115] Allocated netid 5825267 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-9592"\nI0408 14:30:31.635978       1 vnids.go:115] Allocated netid 1398419 for namespace "e2e-control-plane-available-4257"\nI0408 14:30:31.648437       1 vnids.go:115] Allocated netid 14728612 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-5765"\nI0408 14:30:31.698056       1 vnids.go:115] Allocated netid 13123861 for namespace "e2e-frontend-ingress-available-1643"\nI0408 14:30:31.713917       1 vnids.go:115] Allocated netid 3807937 for namespace "e2e-k8s-sig-apps-job-upgrade-9935"\nI0408 14:30:31.726841       1 vnids.go:115] Allocated netid 1110664 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8076"\n
Apr 08 14:43:45.007 E ns/openshift-sdn pod/sdn-d6l6n node/ip-10-0-154-130.us-west-2.compute.internal container/sdn container exited with code 255 (Error): penshift-image-registry/image-registry-69b687f4cb-g6kpm\nI0408 14:40:48.205654    2329 pod.go:539] CNI_DEL openshift-monitoring/grafana-ffc4db748-99wh4\nI0408 14:40:48.857544    2329 pod.go:503] CNI_ADD openshift-image-registry/image-registry-69b687f4cb-c8d8b got IP 10.129.2.40, ofport 41\nI0408 14:40:51.139629    2329 pod.go:539] CNI_DEL openshift-image-registry/image-registry-69b687f4cb-vbzd7\nI0408 14:40:52.992545    2329 pod.go:503] CNI_ADD openshift-image-registry/node-ca-58z56 got IP 10.129.2.41, ofport 42\nI0408 14:40:55.702961    2329 pod.go:539] CNI_DEL openshift-image-registry/node-ca-58z56\nI0408 14:40:57.589529    2329 pod.go:539] CNI_DEL openshift-monitoring/thanos-querier-5b4d55469b-rrdfd\nI0408 14:40:58.079703    2329 pod.go:503] CNI_ADD openshift-monitoring/thanos-querier-76f8d8969b-mpfrz got IP 10.129.2.42, ofport 43\nI0408 14:41:02.173582    2329 pod.go:503] CNI_ADD openshift-marketplace/certified-operators-55d85cd88b-tnsn6 got IP 10.129.2.43, ofport 44\nI0408 14:41:03.716419    2329 pod.go:503] CNI_ADD openshift-marketplace/community-operators-569c9c888b-xlq2r got IP 10.129.2.44, ofport 45\nI0408 14:41:04.452470    2329 pod.go:503] CNI_ADD openshift-marketplace/redhat-marketplace-6f4df77644-m47kr got IP 10.129.2.45, ofport 46\nI0408 14:41:09.902754    2329 pod.go:503] CNI_ADD openshift-image-registry/node-ca-58z56 got IP 10.129.2.46, ofport 47\nI0408 14:41:11.044377    2329 pod.go:539] CNI_DEL openshift-monitoring/prometheus-k8s-0\nI0408 14:41:19.610074    2329 pod.go:539] CNI_DEL openshift-monitoring/alertmanager-main-1\nI0408 14:41:29.918959    2329 pod.go:503] CNI_ADD openshift-monitoring/prometheus-k8s-0 got IP 10.129.2.47, ofport 48\nI0408 14:41:30.005416    2329 pod.go:503] CNI_ADD openshift-monitoring/alertmanager-main-1 got IP 10.129.2.48, ofport 49\nI0408 14:41:58.487607    2329 pod.go:539] CNI_DEL openshift-ingress/router-default-5855d78f6c-7s4lp\nF0408 14:43:43.859812    2329 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 14:44:03.574 E ns/openshift-multus pod/multus-jzwpc node/ip-10-0-129-31.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:44:03.582 E ns/openshift-multus pod/multus-admission-controller-9gnwc node/ip-10-0-131-146.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 14:44:04.592 E ns/openshift-sdn pod/sdn-controller-gqrsk node/ip-10-0-131-146.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0408 14:11:40.569817       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 14:15:48.653378       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: leader changed\nE0408 14:17:55.772790       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Apr 08 14:44:12.424 E ns/openshift-sdn pod/sdn-4s9nt node/ip-10-0-136-206.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0408 14:43:45.630051   81294 node.go:146] Initializing SDN node "ip-10-0-136-206.us-west-2.compute.internal" (10.0.136.206) of type "redhat/openshift-ovs-networkpolicy"\nI0408 14:43:45.634485   81294 cmd.go:151] Starting node networking (unknown)\nI0408 14:43:45.772261   81294 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 14:43:45.937836   81294 networkpolicy.go:330] SyncVNIDRules: 1 unused VNIDs\nI0408 14:43:45.938175   81294 proxy.go:103] Using unidling+iptables Proxier.\nI0408 14:43:45.938462   81294 proxy.go:129] Tearing down userspace rules.\nI0408 14:43:46.116709   81294 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 14:43:46.123279   81294 config.go:313] Starting service config controller\nI0408 14:43:46.123312   81294 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 14:43:46.123330   81294 config.go:131] Starting endpoints config controller\nI0408 14:43:46.123353   81294 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 14:43:46.123543   81294 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 14:43:46.223511   81294 shared_informer.go:204] Caches are synced for service config \nI0408 14:43:46.224873   81294 shared_informer.go:204] Caches are synced for endpoints config \nF0408 14:44:12.006041   81294 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 14:44:46.805 E ns/openshift-multus pod/multus-admission-controller-qw7wh node/ip-10-0-145-206.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 14:44:52.174 E ns/openshift-multus pod/multus-9wh7h node/ip-10-0-154-130.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:45:20.903 E ns/openshift-sdn pod/sdn-6r5h6 node/ip-10-0-131-146.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0408 14:44:24.863896   93624 node.go:146] Initializing SDN node "ip-10-0-131-146.us-west-2.compute.internal" (10.0.131.146) of type "redhat/openshift-ovs-networkpolicy"\nI0408 14:44:24.868035   93624 cmd.go:151] Starting node networking (unknown)\nI0408 14:44:24.973034   93624 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 14:44:25.070432   93624 proxy.go:103] Using unidling+iptables Proxier.\nI0408 14:44:25.070790   93624 proxy.go:129] Tearing down userspace rules.\nI0408 14:44:25.072774   93624 networkpolicy.go:330] SyncVNIDRules: 19 unused VNIDs\nI0408 14:44:25.250378   93624 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 14:44:25.259525   93624 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 14:44:25.259904   93624 config.go:313] Starting service config controller\nI0408 14:44:25.259925   93624 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 14:44:25.259945   93624 config.go:131] Starting endpoints config controller\nI0408 14:44:25.259953   93624 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 14:44:25.360032   93624 shared_informer.go:204] Caches are synced for endpoints config \nI0408 14:44:25.360032   93624 shared_informer.go:204] Caches are synced for service config \nF0408 14:45:20.299992   93624 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Apr 08 14:45:38.053 E ns/openshift-multus pod/multus-admission-controller-dgdkk node/ip-10-0-129-31.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 14:45:39.959 E ns/openshift-multus pod/multus-pjkqm node/ip-10-0-131-146.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:45:49.190 E ns/openshift-sdn pod/sdn-6lhrf node/ip-10-0-129-31.us-west-2.compute.internal container/sdn container exited with code 255 (Error): I0408 14:44:11.678001   94717 node.go:146] Initializing SDN node "ip-10-0-129-31.us-west-2.compute.internal" (10.0.129.31) of type "redhat/openshift-ovs-networkpolicy"\nI0408 14:44:11.682065   94717 cmd.go:151] Starting node networking (unknown)\nI0408 14:44:11.783874   94717 sdn_controller.go:137] [SDN setup] SDN is already set up\nI0408 14:44:11.876830   94717 proxy.go:103] Using unidling+iptables Proxier.\nI0408 14:44:11.877226   94717 proxy.go:129] Tearing down userspace rules.\nI0408 14:44:11.882268   94717 networkpolicy.go:330] SyncVNIDRules: 8 unused VNIDs\nI0408 14:44:12.064956   94717 proxy.go:95] Starting multitenant SDN proxy endpoint filter\nI0408 14:44:12.074016   94717 config.go:313] Starting service config controller\nI0408 14:44:12.074026   94717 config.go:131] Starting endpoints config controller\nI0408 14:44:12.074047   94717 shared_informer.go:197] Waiting for caches to sync for service config\nI0408 14:44:12.074052   94717 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI0408 14:44:12.074281   94717 proxy.go:229] Started Kubernetes Proxy on 0.0.0.0\nI0408 14:44:12.174174   94717 shared_informer.go:204] Caches are synced for service config \nI0408 14:44:12.174185   94717 shared_informer.go:204] Caches are synced for endpoints config \nI0408 14:45:37.166149   94717 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-dgdkk\nW0408 14:45:42.152165   94717 pod.go:274] CNI_ADD openshift-multus/multus-admission-controller-xcjf9 failed: exit status 1\nI0408 14:45:42.161611   94717 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-xcjf9\nI0408 14:45:42.198803   94717 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-xcjf9\nF0408 14:45:48.446722   94717 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Apr 08 14:46:35.812 E ns/openshift-multus pod/multus-cdwdp node/ip-10-0-136-206.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:47:28.404 E ns/openshift-multus pod/multus-fshq8 node/ip-10-0-145-206.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:48:16.538 E ns/openshift-multus pod/multus-q68c5 node/ip-10-0-128-56.us-west-2.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 08 14:48:55.381 E ns/openshift-machine-config-operator pod/machine-config-operator-864b8f4f77-5cf4p node/ip-10-0-131-146.us-west-2.compute.internal container/machine-config-operator container exited with code 2 (Error): ", Name:"machine-config", UID:"3f5f075f-ca61-4728-b4ef-8e56cb6e43d6", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-04-08-135115}]\nE0408 14:13:09.871503       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0408 14:13:09.878658       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0408 14:13:10.910171       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0408 14:13:14.465361       1 sync.go:61] [init mode] synced RenderConfig in 5.377640953s\nI0408 14:13:14.628128       1 sync.go:61] [init mode] synced MachineConfigPools in 116.269151ms\nI0408 14:13:34.031358       1 sync.go:61] [init mode] synced MachineConfigDaemon in 19.370546926s\nI0408 14:13:40.074675       1 sync.go:61] [init mode] synced MachineConfigController in 6.040751455s\nI0408 14:13:42.128479       1 sync.go:61] [init mode] synced MachineConfigServer in 2.051894749s\nI0408 14:16:11.135996       1 sync.go:61] [init mode] synced RequiredPools in 2m29.00580262s\nI0408 14:16:11.532511       1 sync.go:92] Initialization complete\nE0408 14:17:55.761052       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Apr 08 14:51:09.721 E ns/openshift-machine-config-operator pod/machine-config-daemon-ghflk node/ip-10-0-131-146.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:51:24.320 E ns/openshift-machine-config-operator pod/machine-config-daemon-xlx82 node/ip-10-0-136-206.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:51:30.897 E ns/openshift-machine-config-operator pod/machine-config-daemon-hx5rb node/ip-10-0-128-56.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:51:54.092 E ns/openshift-machine-config-operator pod/machine-config-daemon-xj4ps node/ip-10-0-154-130.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:52:04.878 E ns/openshift-machine-config-operator pod/machine-config-controller-5f6f847986-pksnd node/ip-10-0-131-146.us-west-2.compute.internal container/machine-config-controller container exited with code 2 (Error): g resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0408 14:23:37.741350       1 node_controller.go:452] Pool worker: node ip-10-0-128-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:23:37.741384       1 node_controller.go:452] Pool worker: node ip-10-0-128-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:23:37.741391       1 node_controller.go:452] Pool worker: node ip-10-0-128-56.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0408 14:24:04.991028       1 node_controller.go:452] Pool worker: node ip-10-0-136-206.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:24:04.991057       1 node_controller.go:452] Pool worker: node ip-10-0-136-206.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:24:04.991063       1 node_controller.go:452] Pool worker: node ip-10-0-136-206.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0408 14:24:13.184955       1 node_controller.go:452] Pool worker: node ip-10-0-154-130.us-west-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:24:13.184979       1 node_controller.go:452] Pool worker: node ip-10-0-154-130.us-west-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-3545ee71746ae54fb2b0c7c50f31f20c\nI0408 14:24:13.184985       1 node_controller.go:452] Pool worker: node ip-10-0-154-130.us-west-2.compute.internal changed machineconfiguration.openshift.io/state = Done\n
Apr 08 14:53:39.435 E ns/openshift-machine-config-operator pod/machine-config-server-kldqk node/ip-10-0-145-206.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:13:40.822835       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:13:40.823782       1 api.go:51] Launching server on :22624\nI0408 14:13:40.823823       1 api.go:51] Launching server on :22623\nI0408 14:19:47.608566       1 api.go:97] Pool worker requested by 10.0.154.22:1942\nI0408 14:19:48.483645       1 api.go:97] Pool worker requested by 10.0.154.22:23691\n
Apr 08 14:53:47.133 E ns/openshift-machine-config-operator pod/machine-config-server-x6vgx node/ip-10-0-131-146.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:13:40.880687       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:13:40.881339       1 api.go:51] Launching server on :22624\nI0408 14:13:40.881502       1 api.go:51] Launching server on :22623\nI0408 14:19:52.109731       1 api.go:97] Pool worker requested by 10.0.143.118:16814\n
Apr 08 14:53:49.685 E ns/openshift-machine-config-operator pod/machine-config-server-k9xzz node/ip-10-0-129-31.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:15:46.182545       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:15:46.189029       1 api.go:51] Launching server on :22624\nI0408 14:15:46.189129       1 api.go:51] Launching server on :22623\n
Apr 08 14:53:51.975 E ns/openshift-monitoring pod/kube-state-metrics-5fb4c74774-ggxw4 node/ip-10-0-154-130.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 08 14:53:51.992 E ns/openshift-monitoring pod/prometheus-adapter-9768c887b-ch66t node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0408 14:40:44.353141       1 adapter.go:93] successfully using in-cluster auth\nI0408 14:40:45.167862       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 14:53:52.060 E ns/openshift-marketplace pod/community-operators-569c9c888b-xlq2r node/ip-10-0-154-130.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 08 14:53:53.010 E ns/openshift-monitoring pod/thanos-querier-76f8d8969b-mpfrz node/ip-10-0-154-130.us-west-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/04/08 14:41:06 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:41:06 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:41:06 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:41:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 14:41:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:41:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/04/08 14:41:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:41:06 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 14:41:06 http.go:107: HTTPS: listening on [::]:9091\nI0408 14:41:06.263061       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 14:53:53.026 E ns/openshift-marketplace pod/redhat-marketplace-6f4df77644-m47kr node/ip-10-0-154-130.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 08 14:53:53.189 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:41:34.847Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:41:34.849Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:41:34.850Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:41:34.850Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:41:34.850Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:41:34.851Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:41:34.851Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:41:34.852Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:41:34.852Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:53:53.189 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_prometheus-k8s-0_2aa6c977-5018-421c-b3d7-18014a0feb13/prometheus-config-reloader/0.log": lstat /var/log/pods/openshift-monitoring_prometheus-k8s-0_2aa6c977-5018-421c-b3d7-18014a0feb13/prometheus-config-reloader/0.log: no such file or directory
Apr 08 14:53:56.657 E ns/openshift-machine-api pod/machine-api-operator-867dc5555f-dqxwn node/ip-10-0-145-206.us-west-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 08 14:54:00.850 E ns/openshift-machine-api pod/machine-api-controllers-7b49c4d9c-9csxr node/ip-10-0-145-206.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 14:54:17.008 E ns/openshift-console pod/console-69bb968c96-h5psd node/ip-10-0-145-206.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T14:41:38Z cmd/main: cookies are secure!\n2020-04-08T14:41:38Z cmd/main: Binding to [::]:8443...\n2020-04-08T14:41:38Z cmd/main: using TLS\n
Apr 08 14:54:25.230 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-136-206.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:54:18.768Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:54:18.774Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:54:18.775Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:54:18.776Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:54:18.776Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:54:18.776Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:54:18.777Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:54:18.777Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:54:36.148 E ns/e2e-k8s-service-lb-available-17 pod/service-test-p5nwn node/ip-10-0-154-130.us-west-2.compute.internal container/netexec container exited with code 2 (Error): 
Apr 08 14:55:54.919 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Apr 08 14:56:25.319 E ns/openshift-cluster-node-tuning-operator pod/tuned-2jfqd node/ip-10-0-154-130.us-west-2.compute.internal container/tuned container exited with code 143 (Error): go:169] disabling system tuned...\nI0408 14:41:43.505442   62435 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 14:41:43.527208   62435 tuned.go:513] tuned "rendered" added\nI0408 14:41:43.527234   62435 tuned.go:218] extracting tuned profiles\nI0408 14:41:44.449721   62435 tuned.go:392] getting recommended profile...\nI0408 14:41:44.581274   62435 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0408 14:41:44.581689   62435 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 14:41:44.581728   62435 tuned.go:285] starting tuned...\n2020-04-08 14:41:44,690 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:41:44,697 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:41:44,697 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:41:44,697 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 14:41:44,698 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 14:41:44,731 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:41:44,731 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:41:44,741 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:41:44,742 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:41:44,745 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:41:44,747 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 14:41:44,748 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 14:41:44,862 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:41:44,871 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 08 14:56:25.326 E ns/openshift-monitoring pod/node-exporter-vjc58 node/ip-10-0-154-130.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:42:13Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:13Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:56:25.342 E ns/openshift-sdn pod/ovs-rgtxb node/ip-10-0-154-130.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error):  the last 0 s (2 deletes)\n2020-04-08T14:53:51.843Z|00099|connmgr|INFO|br0<->unix#541: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:53:51.883Z|00100|bridge|INFO|bridge br0: deleted interface veth7b0d80a0 on port 21\n2020-04-08T14:53:51.942Z|00101|connmgr|INFO|br0<->unix#544: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:53:52.006Z|00102|connmgr|INFO|br0<->unix#547: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:53:52.043Z|00103|bridge|INFO|bridge br0: deleted interface veth31827b2b on port 49\n2020-04-08T14:53:52.098Z|00104|connmgr|INFO|br0<->unix#550: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:53:52.169Z|00105|connmgr|INFO|br0<->unix#554: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:53:52.201Z|00106|bridge|INFO|bridge br0: deleted interface veth1e65ee99 on port 46\n2020-04-08T14:53:52.271Z|00107|connmgr|INFO|br0<->unix#557: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:53:52.345Z|00108|connmgr|INFO|br0<->unix#562: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:53:52.381Z|00109|bridge|INFO|bridge br0: deleted interface veth238063e1 on port 48\n2020-04-08T14:53:52.431Z|00110|connmgr|INFO|br0<->unix#565: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:53:52.495Z|00111|connmgr|INFO|br0<->unix#568: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:53:52.529Z|00112|bridge|INFO|bridge br0: deleted interface vethf0f5bf80 on port 43\n2020-04-08T14:54:35.363Z|00113|connmgr|INFO|br0<->unix#602: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:35.393Z|00114|connmgr|INFO|br0<->unix#605: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:54:35.422Z|00115|bridge|INFO|bridge br0: deleted interface vethf5138bc8 on port 18\n2020-04-08T14:54:35.409Z|00011|jsonrpc|WARN|unix#517: receive error: Connection reset by peer\n2020-04-08T14:54:35.409Z|00012|reconnect|WARN|unix#517: connection dropped (Connection reset by peer)\n2020-04-08 14:54:40 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 14:56:25.392 E ns/openshift-multus pod/multus-fshw9 node/ip-10-0-154-130.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 14:56:25.410 E ns/openshift-machine-config-operator pod/machine-config-daemon-dt6kq node/ip-10-0-154-130.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:56:34.535 E ns/openshift-machine-config-operator pod/machine-config-daemon-dt6kq node/ip-10-0-154-130.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 14:56:51.086 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-56.us-west-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/04/08 14:41:06 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Apr 08 14:56:51.086 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-56.us-west-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/04/08 14:41:06 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 14:41:06 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:41:06 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:41:06 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/04/08 14:41:06 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:41:06 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/04/08 14:41:06 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:41:06 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/04/08 14:41:06 http.go:107: HTTPS: listening on [::]:9091\nI0408 14:41:06.896000       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/04/08 14:53:59 oauthproxy.go:774: basicauth: 10.131.0.23:48016 Authorization header does not start with 'Basic', skipping basic authentication\n
Apr 08 14:56:51.086 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-128-56.us-west-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-04-08T14:41:02.948881163Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-04-08T14:41:02.951000845Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-04-08T14:41:08.089451001Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-04-08T14:41:08.089523558Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Apr 08 14:56:51.346 E ns/openshift-kube-storage-version-migrator pod/migrator-6c69649c48-htx78 node/ip-10-0-128-56.us-west-2.compute.internal container/migrator container exited with code 2 (Error): 
Apr 08 14:56:56.428 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-145-206.us-west-2.compute.internal" not ready since 2020-04-08 14:55:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: ip-10-0-145-206.us-west-2.compute.internal,ip-10-0-129-31.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 08 14:57:00.079 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-145-206.us-west-2.compute.internal" not ready since 2020-04-08 14:55:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 08 14:57:00.086 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-145-206.us-west-2.compute.internal" not ready since 2020-04-08 14:55:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 08 14:57:00.089 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-145-206.us-west-2.compute.internal" not ready since 2020-04-08 14:55:00 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Apr 08 14:57:04.029 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/cluster-policy-controller container exited with code 1 (Error): formers/factory.go:135: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)\nE0408 14:40:52.771484       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0408 14:40:52.771509       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0408 14:40:52.771525       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0408 14:40:52.771542       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: unknown (get ingresses.networking.k8s.io)\nW0408 14:53:51.006289       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 431; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:53:51.006479       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 509; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:53:51.006428       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 531; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:53:51.006464       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 465; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 14:57:04.029 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 7044       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:53:56.257357       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:53:56.729863       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:53:56.730189       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:54:06.267708       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:06.267956       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:54:06.743285       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:06.743644       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:54:16.277085       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:16.277340       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:54:16.756315       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:16.756690       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:54:26.284860       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:26.285170       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 14:57:04.029 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): TC to 2030-04-06 13:59:46 +0000 UTC (now=2020-04-08 14:37:01.854911182 +0000 UTC))\nI0408 14:37:01.855009       1 tlsconfig.go:178] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-04-08 13:59:50 +0000 UTC to 2020-04-09 13:59:50 +0000 UTC (now=2020-04-08 14:37:01.854997069 +0000 UTC))\nI0408 14:37:01.855334       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1586355188" (2020-04-08 14:13:20 +0000 UTC to 2022-04-08 14:13:21 +0000 UTC (now=2020-04-08 14:37:01.855317987 +0000 UTC))\nI0408 14:37:01.855587       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586356621" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586356621" (2020-04-08 13:37:01 +0000 UTC to 2021-04-08 13:37:01 +0000 UTC (now=2020-04-08 14:37:01.855575797 +0000 UTC))\nI0408 14:37:01.855639       1 secure_serving.go:178] Serving securely on [::]:10257\nI0408 14:37:01.855688       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0408 14:37:01.855735       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nE0408 14:40:52.842075       1 leaderelection.go:320] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system"\n
Apr 08 14:57:04.068 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:07.446358       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:07.446479       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:09.464467       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:09.464567       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:11.470633       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:11.470668       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:13.479106       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:13.479130       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:15.486035       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:15.486060       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:17.514260       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:17.514284       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:19.522516       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:19.522568       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:21.509551       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:21.509573       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:23.523809       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:23.523831       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:54:25.530566       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:54:25.530590       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 14:57:04.068 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): uler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"\nE0408 14:40:52.886498       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"\nE0408 14:40:52.886792       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0408 14:40:52.886880       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0408 14:40:52.886919       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0408 14:40:52.886967       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0408 14:40:52.887009       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0408 14:40:52.887041       1 reflector.go:380] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0408 14:40:52.887090       1 reflector.go:380] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to watch *v1.Pod: unknown (get pods)\nE0408 14:41:10.466937       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 33cdf20c-dbd8-496d-96e7-ef161cb9cf40 is not added to scheduler cache, so cannot be updated\nE0408 14:41:11.074529       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 33cdf20c-dbd8-496d-96e7-ef161cb9cf40 is not added to scheduler cache, so cannot be updated\nE0408 14:41:11.888166       1 eventhandlers.go:256] scheduler cache UpdatePod failed: pod 33cdf20c-dbd8-496d-96e7-ef161cb9cf40 is not added to scheduler cache, so cannot be updated\n
Apr 08 14:57:04.108 E ns/openshift-monitoring pod/node-exporter-zfvjt node/ip-10-0-145-206.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:40:57Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:40:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:57:04.129 E ns/openshift-cluster-node-tuning-operator pod/tuned-9gd5b node/ip-10-0-145-206.us-west-2.compute.internal container/tuned container exited with code 143 (Error): 4:41:01.848368   81265 tuned.go:392] getting recommended profile...\nI0408 14:41:02.013318   81265 tuned.go:419] active profile () != recommended profile (openshift-control-plane)\nI0408 14:41:02.013418   81265 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 14:41:02.013481   81265 tuned.go:285] starting tuned...\n2020-04-08 14:41:02,177 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:41:02,192 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:41:02,192 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:41:02,193 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 14:41:02,194 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 14:41:02,266 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:41:02,266 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:41:02,277 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:41:02,278 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:41:02,281 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:41:02,282 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-08 14:41:02,284 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-08 14:41:02,363 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:41:02,381 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n2020-04-08 14:54:26,393 INFO     tuned.daemon.controller: terminating controller\n2020-04-08 14:54:26,393 INFO     tuned.daemon.daemon: stopping tuning\nI0408 14:54:26.393988   81265 tuned.go:114] received signal: terminated\nI0408 14:54:26.394020   81265 tuned.go:326] sending TERM to PID 81467\n
Apr 08 14:57:04.156 E ns/openshift-controller-manager pod/controller-manager-vpshg node/ip-10-0-145-206.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): I0408 14:41:40.977026       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0408 14:41:40.978752       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:baf34611b723ba5e9b3ead8872fed2c8af700156096054d720d42a057f5f24be"\nI0408 14:41:40.978839       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mxwilb0y/stable@sha256:19880395f98981bdfd98ffbfc9e4e878aa085ecf1e91f2073c24679545e41478"\nI0408 14:41:40.978944       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0408 14:41:40.979683       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Apr 08 14:57:04.173 E ns/openshift-sdn pod/sdn-controller-xsl64 node/ip-10-0-145-206.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0408 14:43:40.742724       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 14:43:40.761105       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"9e21f329-08cf-4c18-86c1-3e7be910afa4", ResourceVersion:"37485", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721951899, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-145-206\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-08T14:11:39Z\",\"renewTime\":\"2020-04-08T14:43:40Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00034ba00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00034ba20)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-145-206 became leader'\nI0408 14:43:40.761185       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0408 14:43:40.764580       1 master.go:51] Initializing SDN master\nI0408 14:43:40.777438       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 08 14:57:04.188 E ns/openshift-multus pod/multus-admission-controller-wkzpq node/ip-10-0-145-206.us-west-2.compute.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 14:57:04.199 E ns/openshift-sdn pod/ovs-rgddp node/ip-10-0-145-206.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): 1.544Z|00138|bridge|INFO|bridge br0: deleted interface veth26ff6f10 on port 61\n2020-04-08T14:54:01.614Z|00139|connmgr|INFO|br0<->unix#580: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:01.670Z|00140|connmgr|INFO|br0<->unix#583: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:54:01.704Z|00141|bridge|INFO|bridge br0: deleted interface vethcbc270b8 on port 57\n2020-04-08T14:54:05.596Z|00142|bridge|INFO|bridge br0: added interface vetheab1fb0d on port 72\n2020-04-08T14:54:05.634Z|00143|connmgr|INFO|br0<->unix#588: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T14:54:05.693Z|00144|connmgr|INFO|br0<->unix#592: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T14:54:05.694Z|00145|connmgr|INFO|br0<->unix#594: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:09.021Z|00146|connmgr|INFO|br0<->unix#598: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:09.075Z|00147|connmgr|INFO|br0<->unix#601: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:54:09.129Z|00148|bridge|INFO|bridge br0: deleted interface vetheab1fb0d on port 72\n2020-04-08T14:54:16.380Z|00149|connmgr|INFO|br0<->unix#610: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:16.407Z|00150|connmgr|INFO|br0<->unix#613: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:54:16.431Z|00151|bridge|INFO|bridge br0: deleted interface vethaba8c597 on port 63\n2020-04-08T14:54:16.679Z|00152|connmgr|INFO|br0<->unix#616: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:54:16.705Z|00153|connmgr|INFO|br0<->unix#619: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:54:16.726Z|00154|bridge|INFO|bridge br0: deleted interface vetha8cd3604 on port 67\n2020-04-08 14:54:26 info: Saving flows ...\n2020-04-08T14:54:26Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-04-08T14:54:26Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Apr 08 14:57:04.228 E ns/openshift-multus pod/multus-cvg62 node/ip-10-0-145-206.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 14:57:04.253 E ns/openshift-machine-config-operator pod/machine-config-daemon-jtnrd node/ip-10-0-145-206.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:57:04.264 E ns/openshift-machine-config-operator pod/machine-config-server-2jzqd node/ip-10-0-145-206.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:53:46.167269       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:53:46.168143       1 api.go:51] Launching server on :22624\nI0408 14:53:46.168190       1 api.go:51] Launching server on :22623\n
Apr 08 14:57:07.976 E ns/openshift-etcd pod/etcd-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-145-206.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T14:35:47.150Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T14:35:47.151Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-145-206.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-145-206.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"warn","ts":"2020-04-08T14:35:47.154Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.145.206:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.145.206:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-08T14:35:47.155Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T14:35:47.155Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T14:35:47.155Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T14:35:48.155Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.145.206:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.145.206:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 14:57:08.055 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 14:40:49.472769       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 14:57:08.055 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-145-206.us-west-2.compute.internal node/ip-10-0-145-206.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 14:54:10.055193       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:10.055724       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 14:54:20.080823       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:54:20.081203       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 14:57:08.758 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-154-130.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T14:57:06.419Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T14:57:06.420Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T14:57:06.422Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T14:57:06.423Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T14:57:06.424Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T14:57:06.424Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T14:57:06.424Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T14:57:06.426Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T14:57:06.426Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 14:57:15.915 E ns/openshift-machine-config-operator pod/machine-config-daemon-jtnrd node/ip-10-0-145-206.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 14:57:18.975 E ns/e2e-k8s-sig-apps-job-upgrade-9935 pod/foo-rcths node/ip-10-0-128-56.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Apr 08 14:57:18.986 E ns/e2e-k8s-sig-apps-job-upgrade-9935 pod/foo-pnx2s node/ip-10-0-128-56.us-west-2.compute.internal container/c container exited with code 137 (Error): 
Apr 08 14:57:31.846 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Apr 08 14:57:35.018 E ns/e2e-k8s-service-lb-available-17 pod/service-test-xnrm8 node/ip-10-0-128-56.us-west-2.compute.internal container/netexec container exited with code 2 (Error): 
Apr 08 14:57:58.368 E ns/openshift-console pod/console-69bb968c96-m9x4r node/ip-10-0-129-31.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T14:41:27Z cmd/main: cookies are secure!\n2020-04-08T14:41:27Z cmd/main: Binding to [::]:8443...\n2020-04-08T14:41:27Z cmd/main: using TLS\n
Apr 08 14:58:19.150 E kube-apiserver Kube API started failing: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 08 14:58:21.351 E clusteroperator/insights changed Degraded to True: PeriodicGatherFailed: Source config could not be retrieved: Get https://172.30.0.1:443/apis/config.openshift.io/v1/clusteroperators: unexpected EOF
Apr 08 14:59:11.989 E ns/openshift-marketplace pod/redhat-marketplace-6f4df77644-v8hq5 node/ip-10-0-136-206.us-west-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Apr 08 14:59:31.039 E ns/openshift-marketplace pod/certified-operators-55d85cd88b-2bfgt node/ip-10-0-136-206.us-west-2.compute.internal container/certified-operators container exited with code 2 (Error): 
Apr 08 14:59:39.069 E ns/openshift-marketplace pod/community-operators-569c9c888b-96slw node/ip-10-0-136-206.us-west-2.compute.internal container/community-operators container exited with code 2 (Error): 
Apr 08 14:59:44.529 E ns/openshift-cluster-node-tuning-operator pod/tuned-vjt8p node/ip-10-0-128-56.us-west-2.compute.internal container/tuned container exited with code 143 (Error): ervice does not exist.\nI0408 14:41:28.628154   51624 tuned.go:392] getting recommended profile...\nI0408 14:41:28.753560   51624 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0408 14:41:28.753618   51624 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 14:41:28.753653   51624 tuned.go:285] starting tuned...\n2020-04-08 14:41:28,866 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:41:28,872 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:41:28,873 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:41:28,873 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 14:41:28,874 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 14:41:28,908 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:41:28,908 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:41:28,920 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:41:28,921 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:41:28,924 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:41:28,925 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 14:41:28,927 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 14:41:29,063 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:41:29,070 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-04-08 14:58:00,099 INFO     tuned.daemon.controller: terminating controller\n2020-04-08 14:58:00,099 INFO     tuned.daemon.daemon: stopping tuning\nI0408 14:58:00.099607   51624 tuned.go:114] received signal: terminated\nI0408 14:58:00.099654   51624 tuned.go:326] sending TERM to PID 51650\n
Apr 08 14:59:44.546 E ns/openshift-monitoring pod/node-exporter-bkkhf node/ip-10-0-128-56.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:41:38Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:38Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 14:59:44.565 E ns/openshift-sdn pod/ovs-g8jpr node/ip-10-0-128-56.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): ce veth96376482 on port 33\n2020-04-08T14:56:50.468Z|00104|connmgr|INFO|br0<->unix#654: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:56:50.512Z|00105|connmgr|INFO|br0<->unix#657: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:56:50.545Z|00106|bridge|INFO|bridge br0: deleted interface vetha3e989fc on port 23\n2020-04-08T14:57:18.543Z|00107|connmgr|INFO|br0<->unix#681: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:18.574Z|00108|connmgr|INFO|br0<->unix#684: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:18.595Z|00109|bridge|INFO|bridge br0: deleted interface vethd603b7c0 on port 19\n2020-04-08T14:57:18.627Z|00110|connmgr|INFO|br0<->unix#687: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:18.663Z|00111|connmgr|INFO|br0<->unix#690: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:18.686Z|00112|bridge|INFO|bridge br0: deleted interface veth13b48183 on port 17\n2020-04-08T14:57:34.088Z|00113|connmgr|INFO|br0<->unix#702: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:34.119Z|00114|connmgr|INFO|br0<->unix#705: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:34.143Z|00115|bridge|INFO|bridge br0: deleted interface vethed9e0f11 on port 32\n2020-04-08T14:57:35.677Z|00116|connmgr|INFO|br0<->unix#708: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:35.712Z|00117|connmgr|INFO|br0<->unix#711: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:35.734Z|00118|bridge|INFO|bridge br0: deleted interface vethd24fe91a on port 26\n2020-04-08T14:57:35.723Z|00009|jsonrpc|WARN|unix#612: receive error: Connection reset by peer\n2020-04-08T14:57:35.723Z|00010|reconnect|WARN|unix#612: connection dropped (Connection reset by peer)\n2020-04-08T14:57:35.728Z|00011|jsonrpc|WARN|unix#613: receive error: Connection reset by peer\n2020-04-08T14:57:35.728Z|00012|reconnect|WARN|unix#613: connection dropped (Connection reset by peer)\n2020-04-08 14:58:00 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 14:59:44.614 E ns/openshift-multus pod/multus-mm7jn node/ip-10-0-128-56.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 14:59:44.630 E ns/openshift-machine-config-operator pod/machine-config-daemon-xtrsp node/ip-10-0-128-56.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 14:59:53.172 E ns/openshift-machine-config-operator pod/machine-config-daemon-xtrsp node/ip-10-0-128-56.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 15:00:03.518 E ns/openshift-monitoring pod/prometheus-adapter-9768c887b-2qpgp node/ip-10-0-136-206.us-west-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0408 14:40:28.864058       1 adapter.go:93] successfully using in-cluster auth\nI0408 14:40:29.504129       1 secure_serving.go:116] Serving securely on [::]:6443\n
Apr 08 15:00:03.562 E ns/openshift-monitoring pod/kube-state-metrics-5fb4c74774-5zzpv node/ip-10-0-136-206.us-west-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 08 15:00:03.621 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7b6bd866f5-kkx5c node/ip-10-0-136-206.us-west-2.compute.internal container/operator container exited with code 255 (Error):  m=+264.001417903\nI0408 14:58:21.937134       1 operator.go:147] Finished syncing operator at 593.852818ms\nI0408 14:58:21.937186       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:21.937182271 +0000 UTC m=+264.595326447\nI0408 14:58:22.537650       1 operator.go:147] Finished syncing operator at 600.458461ms\nI0408 14:58:41.008380       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:41.008368442 +0000 UTC m=+283.666512690\nI0408 14:58:41.029071       1 operator.go:147] Finished syncing operator at 20.694145ms\nI0408 14:58:41.029129       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:41.029121805 +0000 UTC m=+283.687266100\nI0408 14:58:41.058209       1 operator.go:147] Finished syncing operator at 29.073426ms\nI0408 14:58:41.799794       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:41.799780934 +0000 UTC m=+284.457925276\nI0408 14:58:41.819404       1 operator.go:147] Finished syncing operator at 19.61752ms\nI0408 14:58:41.901116       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:41.901096744 +0000 UTC m=+284.559241000\nI0408 14:58:41.919536       1 operator.go:147] Finished syncing operator at 18.434019ms\nI0408 14:58:41.998389       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:41.998381963 +0000 UTC m=+284.656526207\nI0408 14:58:42.018036       1 operator.go:147] Finished syncing operator at 19.648263ms\nI0408 14:58:42.099411       1 operator.go:145] Starting syncing operator at 2020-04-08 14:58:42.099404822 +0000 UTC m=+284.757548987\nI0408 14:58:42.618010       1 operator.go:147] Finished syncing operator at 518.595638ms\nI0408 15:00:00.802397       1 operator.go:145] Starting syncing operator at 2020-04-08 15:00:00.802385085 +0000 UTC m=+363.460529453\nI0408 15:00:00.932735       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0408 15:00:00.933643       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0408 15:00:00.933672       1 builder.go:210] server exited\n
Apr 08 15:00:04.574 E ns/openshift-monitoring pod/telemeter-client-9dd57bf8c-l5rpt node/ip-10-0-136-206.us-west-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 08 15:00:04.574 E ns/openshift-monitoring pod/telemeter-client-9dd57bf8c-l5rpt node/ip-10-0-136-206.us-west-2.compute.internal container/reload container exited with code 2 (Error): 
Apr 08 15:00:04.660 E ns/openshift-monitoring pod/grafana-7fd498c58b-5vr2j node/ip-10-0-136-206.us-west-2.compute.internal container/grafana container exited with code 1 (Error): 
Apr 08 15:00:04.660 E ns/openshift-monitoring pod/grafana-7fd498c58b-5vr2j node/ip-10-0-136-206.us-west-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 08 15:00:04.711 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-206.us-west-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/04/08 14:41:10 Watching directory: "/etc/alertmanager/config"\n
Apr 08 15:00:04.711 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-136-206.us-west-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/04/08 14:41:11 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 14:41:11 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/04/08 14:41:11 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/04/08 14:41:11 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/04/08 14:41:11 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/04/08 14:41:11 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/04/08 14:41:11 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/04/08 14:41:11 http.go:107: HTTPS: listening on [::]:9095\nI0408 14:41:11.146058       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 08 15:00:17.513 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-56.us-west-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-04-08T15:00:15.748Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-04-08T15:00:15.751Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-04-08T15:00:15.752Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-04-08T15:00:15.753Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-04-08T15:00:15.753Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-04-08T15:00:15.753Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-04-08T15:00:15.753Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-04-08
Apr 08 15:00:38.896 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/cluster-policy-controller container exited with code 1 (Error): I0408 14:35:52.222959       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0408 14:35:52.226746       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0408 14:35:52.230118       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0408 14:35:52.230623       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Apr 08 15:00:38.896 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 6613       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:57:36.997161       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:57:45.779888       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:57:45.780190       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:57:47.004640       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:57:47.005684       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:57:55.785809       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:57:55.786100       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:57:57.011020       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:57:57.011275       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:58:05.794046       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:58:05.794300       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 14:58:07.019036       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:58:07.019278       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 15:00:38.896 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): lifecycle-manager", Name:"packageserver-7ddccff545", UID:"0585b15f-ad5c-4eca-85cc-e38500a52450", APIVersion:"apps/v1", ResourceVersion:"46624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-7ddccff545-2hzq4\nI0408 14:57:48.584823       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-7857c6b78b", UID:"dfdef334-b541-45bb-8aab-d265fe783ccb", APIVersion:"apps/v1", ResourceVersion:"46629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-7857c6b78b-chtvl\nI0408 14:57:48.645193       1 deployment_controller.go:485] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\nW0408 14:57:51.678699       1 garbagecollector.go:644] failed to discover some groups: map[packages.operators.coreos.com/v1:the server is currently unable to handle the request]\nI0408 14:57:51.730317       1 garbagecollector.go:409] processing item [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 9a0b00eb-6c0f-46f6-97f4-a03e3b29121a]\nI0408 14:57:51.732535       1 garbagecollector.go:522] delete object [v1/ConfigMap, namespace: openshift-marketplace, name: marketplace-operator-lock, uid: 9a0b00eb-6c0f-46f6-97f4-a03e3b29121a] with propagation policy Background\nI0408 14:58:02.586624       1 replica_set.go:562] Too few replicas for ReplicaSet openshift-cluster-version/cluster-version-operator-6688cd7857, need 1, creating 1\nI0408 14:58:02.620535       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-cluster-version", Name:"cluster-version-operator-6688cd7857", UID:"7f9d528e-47ef-4dbe-962b-71ab8ef6977a", APIVersion:"apps/v1", ResourceVersion:"46188", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cluster-version-operator-6688cd7857-b2xcf\n
Apr 08 15:00:38.911 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:57:52.594301       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:57:52.594394       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:57:54.601850       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:57:54.601868       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:57:56.610600       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:57:56.610995       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:57:58.617446       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:57:58.617863       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:00.623575       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:00.623603       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:02.642111       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:02.642133       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:04.653289       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:04.653314       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:06.660168       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:06.660188       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:08.676170       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:08.676262       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 14:58:10.688646       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 14:58:10.688669       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 15:00:38.911 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): ing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 14:57:52.556752       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-55d7794bc6-ml2tv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 14:57:57.321668       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-55d7794bc6-ml2tv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 14:57:58.559181       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-6fd77bffc6-thhm7: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 14:58:02.636546       1 scheduler.go:728] pod openshift-cluster-version/cluster-version-operator-6688cd7857-b2xcf is bound successfully on node "ip-10-0-129-31.us-west-2.compute.internal", 6 nodes evaluated, 3 nodes were found feasible.\nI0408 14:58:05.559393       1 factory.go:462] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-55d7794bc6-ml2tv: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0408 14:58:09.561516       1 factory.go:462] Unable to schedule openshift-apiserver/apiserver-6fd77bffc6-thhm7: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Apr 08 15:00:38.912 E ns/openshift-monitoring pod/node-exporter-k99g4 node/ip-10-0-129-31.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:41:20Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:20Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 15:00:38.963 E ns/openshift-cluster-node-tuning-operator pod/tuned-d8dkm node/ip-10-0-129-31.us-west-2.compute.internal container/tuned container exited with code 143 (Error): profiles changed, forcing tuned daemon reload\nI0408 14:41:53.669090   89641 tuned.go:285] starting tuned...\n2020-04-08 14:41:53,769 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:41:53,776 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:41:53,776 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:41:53,776 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 14:41:53,777 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 14:41:53,806 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:41:53,806 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:41:53,815 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:41:53,816 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:41:53,819 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:41:53,820 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-08 14:41:53,822 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-08 14:41:53,901 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:41:53,907 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 14:57:46.716440   89641 tuned.go:486] profile "ip-10-0-129-31.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0408 14:57:46.773463   89641 tuned.go:486] profile "ip-10-0-129-31.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0408 14:57:47.531380   89641 tuned.go:392] getting recommended profile...\nI0408 14:57:47.696255   89641 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\n
Apr 08 15:00:38.974 E ns/openshift-controller-manager pod/controller-manager-rdbhl node/ip-10-0-129-31.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): s": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "installer-artifacts": the object has been modified; please apply your changes to the latest version and try again\nE0408 14:42:50.737084       1 scheduled_image_controller.go:152] Operation cannot be fulfilled on imagestreamimports.image.openshift.io "tools": the object has been modified; please apply your changes to the latest version and try again\nW0408 14:53:51.007175       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 607; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:53:51.007266       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 527; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:53:51.007336       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 625; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:57:30.834348       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 19; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:57:30.837154       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 33; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 15:00:38.987 E ns/openshift-sdn pod/sdn-controller-srclk node/ip-10-0-129-31.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0408 14:43:55.629050       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Apr 08 15:00:39.004 E ns/openshift-multus pod/multus-vdlkl node/ip-10-0-129-31.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 15:00:39.012 E ns/openshift-multus pod/multus-admission-controller-xcjf9 node/ip-10-0-129-31.us-west-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 08 15:00:39.043 E ns/openshift-sdn pod/ovs-dggr6 node/ip-10-0-129-31.us-west-2.compute.internal container/openvswitch container exited with code 143 (Error): mgr|INFO|br0<->unix#762: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:57.407Z|00169|bridge|INFO|bridge br0: deleted interface veth60ee25c2 on port 91\n2020-04-08T14:57:57.561Z|00170|connmgr|INFO|br0<->unix#765: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:57.600Z|00171|connmgr|INFO|br0<->unix#768: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:57.620Z|00172|bridge|INFO|bridge br0: deleted interface veth06a6467c on port 87\n2020-04-08T14:57:58.022Z|00173|connmgr|INFO|br0<->unix#771: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:57:58.049Z|00174|connmgr|INFO|br0<->unix#774: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T14:57:58.069Z|00175|bridge|INFO|bridge br0: deleted interface vethb33dded6 on port 79\n2020-04-08T14:58:02.704Z|00176|bridge|INFO|bridge br0: added interface veth78de1c71 on port 92\n2020-04-08T14:58:02.752Z|00177|connmgr|INFO|br0<->unix#780: 5 flow_mods in the last 0 s (5 adds)\n2020-04-08T14:58:02.804Z|00178|connmgr|INFO|br0<->unix#784: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:58:02.806Z|00179|connmgr|INFO|br0<->unix#786: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T14:58:04.028Z|00001|netdev_linux(revalidator3)|INFO|ioctl(SIOCGIFINDEX) on veth78de1c71 device failed: No such device\n2020-04-08T14:58:04.028Z|00002|netdev_tc_offloads(revalidator3)|ERR|dump_create: failed to get ifindex for veth78de1c71: No such device\n2020-04-08T14:58:04.029Z|00180|bridge|INFO|bridge br0: deleted interface veth78de1c71 on port 92\n2020-04-08T14:58:04.040Z|00181|bridge|WARN|could not open network device veth78de1c71 (No such device)\n2020-04-08T14:58:07.625Z|00182|connmgr|INFO|br0<->unix#792: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T14:58:07.658Z|00183|connmgr|INFO|br0<->unix#795: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08 14:58:12 info: Saving flows ...\n2020-04-08T14:58:12Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: send error: Broken pipe\n2020-04-08T14:58:12Z|00002|fatal_signal|WARN|terminating with signal 15 (Terminated)\n
Apr 08 15:00:39.066 E ns/openshift-machine-config-operator pod/machine-config-daemon-jljg6 node/ip-10-0-129-31.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 15:00:39.081 E ns/openshift-machine-config-operator pod/machine-config-server-tbk6w node/ip-10-0-129-31.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:54:03.137878       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:54:03.140492       1 api.go:51] Launching server on :22624\nI0408 14:54:03.140626       1 api.go:51] Launching server on :22623\n
Apr 08 15:00:39.092 E ns/openshift-cluster-version pod/cluster-version-operator-6688cd7857-b2xcf node/ip-10-0-129-31.us-west-2.compute.internal container/cluster-version-operator container exited with code 255 (Error): /v1/namespaces/openshift-cloud-credential-operator/credentialsrequests/openshift-machine-api-azure\nI0408 14:58:12.714539       1 request.go:565] Throttling request took 240.909413ms, request: GET:https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:openshift-config-operator\nI0408 14:58:12.745310       1 request.go:565] Throttling request took 148.659632ms, request: GET:https://127.0.0.1:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps/openshift-apiserver-operator-config\nI0408 14:58:12.746140       1 start.go:140] Shutting down due to terminated\nI0408 14:58:12.746213       1 task_graph.go:568] Canceled worker 11\nI0408 14:58:12.746397       1 task_graph.go:568] Canceled worker 8\nI0408 14:58:12.746580       1 task_graph.go:568] Canceled worker 3\nI0408 14:58:12.746588       1 task_graph.go:568] Canceled worker 6\nI0408 14:58:12.746633       1 cvo.go:439] Started syncing cluster version "openshift-cluster-version/version" (2020-04-08 14:58:12.746627734 +0000 UTC m=+8.999131459)\nI0408 14:58:12.747074       1 task_graph.go:568] Canceled worker 1\nI0408 14:58:12.747066       1 cvo.go:468] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-mxwilb0y/release@sha256:50b30911262079a7f1c27892c225cec4ad5f5c588ab1df8140c59f6667d89567", Force:true}\nI0408 14:58:12.746671       1 task_graph.go:568] Canceled worker 4\nI0408 14:58:12.746678       1 task_graph.go:568] Canceled worker 0\nI0408 14:58:12.746593       1 task_graph.go:568] Canceled worker 7\nI0408 14:58:12.746827       1 task_graph.go:568] Canceled worker 5\nI0408 14:58:12.746833       1 task_graph.go:568] Canceled worker 9\nI0408 14:58:12.746831       1 task_graph.go:568] Canceled worker 13\nI0408 14:58:12.746843       1 task_graph.go:568] Canceled worker 14\nI0408 14:58:12.746664       1 task_graph.go:568] Canceled worker 15\nI0408 14:58:12.747873       1 start.go:188] Stepping down as leader\nF0408 14:58:12.813967       1 start.go:148] Received shutdown signal twice, exiting\n
Apr 08 15:00:40.210 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-129-31.us-west-2.compute.internal" not ready since 2020-04-08 15:00:38 +0000 UTC because KubeletNotReady ([PLEG is not healthy: pleg has yet to be successful, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network])\nEtcdMembersDegraded: ip-10-0-129-31.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 08 15:00:42.881 E ns/openshift-etcd pod/etcd-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): 129-31.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-129-31.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T14:34:49.762Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T14:34:49.763Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-129-31.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-129-31.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T14:34:49.765Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T14:34:49.766Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T14:34:49.766Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T14:34:49.766Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.129.31:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.129.31:9978: connect: connection refused\". Reconnecting..."}\n{"level":"warn","ts":"2020-04-08T14:34:50.766Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.129.31:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.129.31:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 15:00:44.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-apiserver container exited with code 1 (Error): 71c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\nE0408 14:57:33.245329       1 available_controller.go:418] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nE0408 14:57:33.258035       1 available_controller.go:418] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0408 14:57:40.826869       1 trace.go:116] Trace[1500476768]: "List" url:/api/v1/configmaps,user-agent:cluster-etcd-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.145.206 (started: 2020-04-08 14:57:39.739745106 +0000 UTC m=+1128.878732207) (total time: 1.087074124s):\nTrace[1500476768]: [1.08707301s] [1.059291011s] Writing http response done count:386\nI0408 14:58:12.573388       1 controller.go:181] Shutting down kubernetes service endpoint reconciler\nI0408 14:58:12.573345       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-129-31.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nW0408 14:58:12.632994       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.0.131.146 10.0.145.206]\nI0408 14:58:12.719288       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-129-31.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\n
Apr 08 15:00:44.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 14:38:52.252981       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 15:00:44.634 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-129-31.us-west-2.compute.internal node/ip-10-0-129-31.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 14:58:00.257452       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:58:00.257733       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 14:58:10.263735       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 14:58:10.264122       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 15:00:52.100 E ns/openshift-machine-config-operator pod/machine-config-daemon-jljg6 node/ip-10-0-129-31.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 15:01:10.574 E ns/openshift-machine-api pod/machine-api-controllers-7b49c4d9c-6dnwp node/ip-10-0-131-146.us-west-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
Apr 08 15:01:12.726 E ns/openshift-insights pod/insights-operator-5b956674d4-dfzlz node/ip-10-0-131-146.us-west-2.compute.internal container/operator container exited with code 2 (Error): ts-operator/insights-2020-04-08-145821.tar.gz\nI0408 14:58:21.333986       1 diskrecorder.go:134] Wrote 45 records to disk in 16ms\nI0408 14:58:21.334008       1 periodic.go:151] Periodic gather config completed in 919ms\nI0408 14:58:21.334019       1 controllerstatus.go:40] name=periodic-config healthy=true reason= message=\nI0408 14:58:26.006902       1 httplog.go:90] GET /metrics: (12.672469ms) 200 [Prometheus/2.15.2 10.131.0.45:43744]\nI0408 14:58:37.159287       1 httplog.go:90] GET /metrics: (6.581488ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\nI0408 14:58:56.005666       1 httplog.go:90] GET /metrics: (11.270216ms) 200 [Prometheus/2.15.2 10.131.0.45:43744]\nI0408 14:59:07.159888       1 httplog.go:90] GET /metrics: (7.213706ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\nI0408 14:59:26.001084       1 httplog.go:90] GET /metrics: (6.924916ms) 200 [Prometheus/2.15.2 10.131.0.45:43744]\nI0408 14:59:37.159199       1 httplog.go:90] GET /metrics: (6.422974ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\nI0408 14:59:56.000693       1 httplog.go:90] GET /metrics: (6.464657ms) 200 [Prometheus/2.15.2 10.131.0.45:43744]\nI0408 15:00:07.161798       1 httplog.go:90] GET /metrics: (8.986235ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\nI0408 15:00:21.289132       1 status.go:298] The operator is healthy\nI0408 15:00:21.300107       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0408 15:00:21.303842       1 configobserver.go:93] Found cloud.openshift.com token\nI0408 15:00:21.303866       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0408 15:00:26.002281       1 httplog.go:90] GET /metrics: (6.215871ms) 200 [Prometheus/2.15.2 10.128.2.19:48404]\nI0408 15:00:37.159137       1 httplog.go:90] GET /metrics: (6.422086ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\nI0408 15:00:56.003442       1 httplog.go:90] GET /metrics: (12.586994ms) 200 [Prometheus/2.15.2 10.128.2.19:48404]\nI0408 15:01:07.160960       1 httplog.go:90] GET /metrics: (8.138218ms) 200 [Prometheus/2.15.2 10.129.2.15:58862]\n
Apr 08 15:01:34.023 E ns/openshift-console pod/console-69bb968c96-hx5qv node/ip-10-0-131-146.us-west-2.compute.internal container/console container exited with code 2 (Error): 2020-04-08T14:53:59Z cmd/main: cookies are secure!\n2020-04-08T14:53:59Z cmd/main: Binding to [::]:8443...\n2020-04-08T14:53:59Z cmd/main: using TLS\n
Apr 08 15:01:41.422 E kube-apiserver failed contacting the API: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=51022&timeout=9m34s&timeoutSeconds=574&watch=true: dial tcp 34.212.217.72:6443: connect: connection refused
Apr 08 15:01:41.422 E kube-apiserver failed contacting the API: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=51006&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 34.212.217.72:6443: connect: connection refused
Apr 08 15:01:47.319 E kube-apiserver Kube API started failing: Get https://api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 08 15:02:55.975 E ns/openshift-cluster-node-tuning-operator pod/tuned-vfn95 node/ip-10-0-136-206.us-west-2.compute.internal container/tuned container exited with code 143 (Error):  profile "ip-10-0-136-206.us-west-2.compute.internal" added, tuned profile requested: openshift-node\nI0408 14:41:15.120771   73412 tuned.go:169] disabling system tuned...\nI0408 14:41:15.125464   73412 tuned.go:175] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0408 14:41:16.110668   73412 tuned.go:392] getting recommended profile...\nI0408 14:41:16.228996   73412 tuned.go:419] active profile () != recommended profile (openshift-node)\nI0408 14:41:16.229083   73412 tuned.go:434] tuned daemon profiles changed, forcing tuned daemon reload\nI0408 14:41:16.229129   73412 tuned.go:285] starting tuned...\n2020-04-08 14:41:16,357 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:41:16,364 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:41:16,364 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:41:16,365 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-04-08 14:41:16,366 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-04-08 14:41:16,398 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:41:16,398 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:41:16,409 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:41:16,410 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:41:16,413 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:41:16,414 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-04-08 14:41:16,416 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-04-08 14:41:16,528 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:41:16,542 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Apr 08 15:02:55.987 E ns/openshift-monitoring pod/node-exporter-7f9w4 node/ip-10-0-136-206.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:42:25Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:42:25Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 15:02:56.045 E ns/openshift-sdn pod/ovs-vgbqx node/ip-10-0-136-206.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): the last 0 s (4 deletes)\n2020-04-08T15:00:03.314Z|00145|bridge|INFO|bridge br0: deleted interface veth7628035f on port 26\n2020-04-08T15:00:03.373Z|00146|connmgr|INFO|br0<->unix#892: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:03.419Z|00147|connmgr|INFO|br0<->unix#895: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:03.446Z|00148|bridge|INFO|bridge br0: deleted interface veth53ef772c on port 32\n2020-04-08T15:00:03.512Z|00149|connmgr|INFO|br0<->unix#898: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:03.592Z|00150|connmgr|INFO|br0<->unix#901: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:03.640Z|00151|bridge|INFO|bridge br0: deleted interface vethc3fe4722 on port 24\n2020-04-08T15:00:03.692Z|00152|connmgr|INFO|br0<->unix#904: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:03.733Z|00153|connmgr|INFO|br0<->unix#907: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:03.760Z|00154|bridge|INFO|bridge br0: deleted interface vethf3e6a9eb on port 46\n2020-04-08T15:00:03.811Z|00155|connmgr|INFO|br0<->unix#910: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:03.850Z|00156|connmgr|INFO|br0<->unix#913: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:03.901Z|00157|bridge|INFO|bridge br0: deleted interface veth2824da1f on port 45\n2020-04-08T15:00:46.609Z|00158|connmgr|INFO|br0<->unix#949: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:46.641Z|00159|connmgr|INFO|br0<->unix#952: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:46.670Z|00160|bridge|INFO|bridge br0: deleted interface veth8f6ad57d on port 22\n2020-04-08T15:00:48.021Z|00161|connmgr|INFO|br0<->unix#956: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:00:48.054Z|00162|connmgr|INFO|br0<->unix#959: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:00:48.076Z|00163|bridge|INFO|bridge br0: deleted interface veth68969d6a on port 23\n2020-04-08 15:01:11 info: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Apr 08 15:02:56.060 E ns/openshift-multus pod/multus-x4bb2 node/ip-10-0-136-206.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 15:02:56.073 E ns/openshift-machine-config-operator pod/machine-config-daemon-jccw8 node/ip-10-0-136-206.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 15:03:07.357 E ns/openshift-machine-config-operator pod/machine-config-daemon-jccw8 node/ip-10-0-136-206.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 15:04:02.347 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-scheduler-cert-syncer container exited with code 2 (Error): 1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:20.981866       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:20.981976       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:22.996736       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:22.996760       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:25.011824       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:25.011943       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:27.021076       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:27.021099       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:29.029999       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:29.030111       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:31.043754       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:31.043774       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:33.063026       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:33.063049       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:35.075692       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:35.075767       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:37.080879       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:37.081008       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nI0408 15:01:39.091563       1 certsync_controller.go:65] Syncing configmaps: []\nI0408 15:01:39.091702       1 certsync_controller.go:162] Syncing secrets: [{kube-scheduler-client-cert-key false}]\n
Apr 08 15:04:02.347 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-scheduler container exited with code 2 (Error): 178] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-08 13:59:46 +0000 UTC to 2030-04-06 13:59:46 +0000 UTC (now=2020-04-08 14:38:17.975402595 +0000 UTC))\nI0408 14:38:17.975418       1 tlsconfig.go:178] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1586355185" [] issuer="kubelet-signer" (2020-04-08 14:13:04 +0000 UTC to 2020-04-09 13:59:52 +0000 UTC (now=2020-04-08 14:38:17.975413323 +0000 UTC))\nI0408 14:38:17.975434       1 tlsconfig.go:178] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-04-08 13:59:50 +0000 UTC to 2020-04-09 13:59:50 +0000 UTC (now=2020-04-08 14:38:17.97542827 +0000 UTC))\nI0408 14:38:17.975655       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1586355188" (2020-04-08 14:13:18 +0000 UTC to 2022-04-08 14:13:19 +0000 UTC (now=2020-04-08 14:38:17.975646384 +0000 UTC))\nI0408 14:38:17.975825       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586356697" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586356697" (2020-04-08 13:38:17 +0000 UTC to 2021-04-08 13:38:17 +0000 UTC (now=2020-04-08 14:38:17.975815942 +0000 UTC))\n
Apr 08 15:04:02.392 E ns/openshift-controller-manager pod/controller-manager-7h9md node/ip-10-0-131-146.us-west-2.compute.internal container/controller-manager container exited with code 1 (Error): 59:19.453785       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.135.42:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.135.42:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0408 14:59:19.455649       1 deleted_dockercfg_secrets.go:74] caches synced\nI0408 14:59:19.456188       1 deleted_token_secrets.go:69] caches synced\nI0408 14:59:19.458406       1 build_controller.go:474] Starting build controller\nI0408 14:59:19.458474       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0408 15:01:06.457388       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 303; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.459779       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 335; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.459903       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 327; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.460000       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 315; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 15:04:02.416 E ns/openshift-monitoring pod/node-exporter-wmxrn node/ip-10-0-131-146.us-west-2.compute.internal container/node-exporter container exited with code 143 (Error): -08T14:41:54Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-04-08T14:41:54Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Apr 08 15:04:02.428 E ns/openshift-cluster-node-tuning-operator pod/tuned-4zf76 node/ip-10-0-131-146.us-west-2.compute.internal container/tuned container exited with code 143 (Error): ed.daemon.application: dynamic tuning is globally disabled\n2020-04-08 14:42:04,671 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-04-08 14:42:04,672 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-04-08 14:42:04,672 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-04-08 14:42:04,673 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-04-08 14:42:04,706 INFO     tuned.daemon.controller: starting controller\n2020-04-08 14:42:04,706 INFO     tuned.daemon.daemon: starting tuning\n2020-04-08 14:42:04,718 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-04-08 14:42:04,719 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-04-08 14:42:04,722 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-04-08 14:42:04,723 INFO     tuned.plugins.base: instance disk: assigning devices dm-0\n2020-04-08 14:42:04,725 INFO     tuned.plugins.base: instance net: assigning devices ens5\n2020-04-08 14:42:04,802 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-04-08 14:42:04,811 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0408 14:57:46.737159   87819 tuned.go:486] profile "ip-10-0-131-146.us-west-2.compute.internal" changed, tuned profile requested: openshift-node\nI0408 14:57:46.777977   87819 tuned.go:486] profile "ip-10-0-131-146.us-west-2.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0408 14:57:47.445943   87819 tuned.go:392] getting recommended profile...\nI0408 14:57:47.554873   87819 tuned.go:428] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0408 15:01:40.956048   87819 tuned.go:114] received signal: terminated\nI0408 15:01:40.956184   87819 tuned.go:326] sending TERM to PID 87877\n
Apr 08 15:04:02.451 E ns/openshift-sdn pod/sdn-controller-ffm5q node/ip-10-0-131-146.us-west-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0408 14:44:13.598293       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0408 14:55:44.457340       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"9e21f329-08cf-4c18-86c1-3e7be910afa4", ResourceVersion:"44165", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721951899, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-131-146\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-04-08T14:55:44Z\",\"renewTime\":\"2020-04-08T14:55:44Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"openshift-sdn-controller", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0005c0ee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0005c0f00)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-131-146 became leader'\nI0408 14:55:44.457421       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0408 14:55:44.460695       1 master.go:51] Initializing SDN master\nI0408 14:55:44.475261       1 network_controller.go:61] Started OpenShift Network Controller\n
Apr 08 15:04:02.466 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): :54] Starting #1 worker of CertRotationController controller ...\nI0408 14:39:39.253669       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0408 14:39:39.254955       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0408 14:39:39.253956       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0408 14:39:39.253973       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0408 14:39:39.254983       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0408 14:39:39.254989       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0408 14:39:39.253825       1 shared_informer.go:223] Waiting for caches to sync for CertRotationController\nI0408 14:39:39.256167       1 shared_informer.go:230] Caches are synced for CertRotationController \nI0408 14:39:39.256203       1 base_controller.go:54] Starting #1 worker of CertRotationController controller ...\nI0408 14:49:39.222733       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0408 14:49:39.226841       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0408 14:54:26.454642       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-mxwilb0y-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0408 14:54:26.496452       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0408 15:01:40.927501       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\n
Apr 08 15:04:02.466 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-apiserver container exited with code 1 (Error): ructuredmerge.go:103] [SHOULD NOT HAPPEN] failed to create typed new object of type /v1, Kind=Service: errors:\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\n  .metadata.ownerReferences: duplicate entries for key [uid="d6322dea-ca66-471c-8b34-a5a7421ff075"]\nE0408 15:01:16.220071       1 available_controller.go:418] v1.packages.operators.coreos.com failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1.packages.operators.coreos.com": the object has been modified; please apply your changes to the latest version and try again\nI0408 15:01:22.438797       1 trace.go:116] Trace[1085721864]: "List" url:/api/v1/configmaps,user-agent:service-ca-operator/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.0.145.206 (started: 2020-04-08 15:01:21.926298258 +0000 UTC m=+1478.635627728) (total time: 512.475256ms):\nTrace[1085721864]: [512.474404ms] [512.163826ms] Writing http response done count:385\nI0408 15:01:40.926319       1 genericapiserver.go:648] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-131-146.us-west-2.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0408 15:01:40.926569       1 controller.go:181] Shutting down kubernetes service endpoint reconciler\n
Apr 08 15:04:02.466 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-apiserver-insecure-readyz container exited with code 2 (Error): I0408 14:36:44.447056       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Apr 08 15:04:02.466 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 15:01:30.614674       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:30.614968       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0408 15:01:40.624906       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:40.625187       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Apr 08 15:04:02.475 E ns/openshift-multus pod/multus-admission-controller-k8ktb node/ip-10-0-131-146.us-west-2.compute.internal container/multus-admission-controller container exited with code 255 (Error): 
Apr 08 15:04:02.500 E ns/openshift-sdn pod/ovs-v6s5t node/ip-10-0-131-146.us-west-2.compute.internal container/openvswitch container exited with code 1 (Error): |INFO|br0<->unix#1061: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-04-08T15:01:21.697Z|00223|connmgr|INFO|br0<->unix#1063: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:22.052Z|00224|connmgr|INFO|br0<->unix#1066: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:22.108Z|00225|connmgr|INFO|br0<->unix#1069: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:01:22.187Z|00226|bridge|INFO|bridge br0: deleted interface veth2f00ee9d on port 95\n2020-04-08T15:01:23.997Z|00227|connmgr|INFO|br0<->unix#1076: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:24.047Z|00228|connmgr|INFO|br0<->unix#1079: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:01:24.084Z|00229|bridge|INFO|bridge br0: deleted interface veth618a6039 on port 96\n2020-04-08T15:01:25.009Z|00230|connmgr|INFO|br0<->unix#1082: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:25.043Z|00231|connmgr|INFO|br0<->unix#1085: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:01:25.068Z|00232|bridge|INFO|bridge br0: deleted interface vethcfebc0a3 on port 97\n2020-04-08T15:01:32.069Z|00233|connmgr|INFO|br0<->unix#1093: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:32.094Z|00234|connmgr|INFO|br0<->unix#1096: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:01:32.116Z|00235|bridge|INFO|bridge br0: deleted interface vethcd28ff0a on port 92\n2020-04-08T15:01:33.592Z|00236|connmgr|INFO|br0<->unix#1100: 2 flow_mods in the last 0 s (2 deletes)\n2020-04-08T15:01:33.622Z|00237|connmgr|INFO|br0<->unix#1103: 4 flow_mods in the last 0 s (4 deletes)\n2020-04-08T15:01:33.642Z|00238|bridge|INFO|bridge br0: deleted interface veth28ee9e65 on port 79\n2020-04-08 15:01:41 info: Saving flows ...\n2020-04-08T15:01:41Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-04-08T15:01:41Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Apr 08 15:04:02.540 E ns/openshift-multus pod/multus-b8pt5 node/ip-10-0-131-146.us-west-2.compute.internal container/kube-multus container exited with code 143 (Error): 
Apr 08 15:04:02.556 E ns/openshift-machine-config-operator pod/machine-config-server-w54fl node/ip-10-0-131-146.us-west-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0408 14:53:48.873882       1 start.go:38] Version: machine-config-daemon-4.5.0-202004081016-2-g219a9427-dirty (219a942746ea04617729c708baed5d2c7dcb2716)\nI0408 14:53:48.874686       1 api.go:51] Launching server on :22624\nI0408 14:53:48.874747       1 api.go:51] Launching server on :22623\n
Apr 08 15:04:02.569 E ns/openshift-machine-config-operator pod/machine-config-daemon-4vj85 node/ip-10-0-131-146.us-west-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 08 15:04:06.308 E ns/openshift-etcd pod/etcd-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/etcd-metrics container exited with code 2 (Error): us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-131-146.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T14:36:44.600Z","caller":"etcdmain/grpc_proxy.go:320","msg":"listening for gRPC proxy client requests","address":"127.0.0.1:9977"}\n{"level":"info","ts":"2020-04-08T14:36:44.601Z","caller":"etcdmain/grpc_proxy.go:290","msg":"gRPC proxy client TLS","tls-info":"cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-146.us-west-2.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-131-146.us-west-2.compute.internal.key, trusted-ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, client-cert-auth = false, crl-file = "}\n{"level":"info","ts":"2020-04-08T14:36:44.611Z","caller":"etcdmain/grpc_proxy.go:456","msg":"gRPC proxy listening for metrics","address":"https://0.0.0.0:9979"}\n{"level":"info","ts":"2020-04-08T14:36:44.611Z","caller":"etcdmain/grpc_proxy.go:218","msg":"started gRPC proxy","address":"127.0.0.1:9977"}\n{"level":"warn","ts":"2020-04-08T14:36:44.612Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.131.146:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.131.146:9978: connect: connection refused\". Reconnecting..."}\n{"level":"info","ts":"2020-04-08T14:36:44.617Z","caller":"etcdmain/grpc_proxy.go:208","msg":"gRPC proxy server metrics URL serving"}\n{"level":"warn","ts":"2020-04-08T14:36:45.618Z","caller":"grpclog/grpclog.go:60","msg":"grpc: addrConn.createTransport failed to connect to {https://10.0.131.146:9978 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial tcp 10.0.131.146:9978: connect: connection refused\". Reconnecting..."}\n
Apr 08 15:04:07.367 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/cluster-policy-controller container exited with code 1 (Error): :55:38.554013       1 shared_informer.go:204] Caches are synced for resource quota \nW0408 14:57:30.835638       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 287; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 14:57:30.836177       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 303; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.457022       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 265; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.466574       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 413; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.466713       1 reflector.go:326] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 411; INTERNAL_ERROR") has prevented the request from succeeding\nW0408 15:01:06.466844       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 301; INTERNAL_ERROR") has prevented the request from succeeding\n
Apr 08 15:04:07.367 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-controller-manager-cert-syncer container exited with code 2 (Error): 7849       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:10.568211       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:13.297490       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:13.297790       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:20.580285       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:20.580536       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:23.338632       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:23.338956       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:30.587906       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:30.588194       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:33.345651       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:33.345904       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nI0408 15:01:40.598342       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0408 15:01:40.598574       1 certsync_controller.go:162] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\n
Apr 08 15:04:07.367 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-131-146.us-west-2.compute.internal node/ip-10-0-131-146.us-west-2.compute.internal container/kube-controller-manager container exited with code 2 (Error): ] loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-04-08 13:59:46 +0000 UTC to 2030-04-06 13:59:46 +0000 UTC (now=2020-04-08 14:38:13.807397026 +0000 UTC))\nI0408 14:38:13.807411       1 tlsconfig.go:178] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-04-08 13:59:50 +0000 UTC to 2020-04-09 13:59:50 +0000 UTC (now=2020-04-08 14:38:13.807407176 +0000 UTC))\nI0408 14:38:13.807583       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1586355188" (2020-04-08 14:13:20 +0000 UTC to 2022-04-08 14:13:21 +0000 UTC (now=2020-04-08 14:38:13.80757433 +0000 UTC))\nI0408 14:38:13.807752       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1586356693" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1586356693" (2020-04-08 13:38:13 +0000 UTC to 2021-04-08 13:38:13 +0000 UTC (now=2020-04-08 14:38:13.80774522 +0000 UTC))\nI0408 14:38:13.807778       1 secure_serving.go:178] Serving securely on [::]:10257\nI0408 14:38:13.807809       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0408 14:38:13.807852       1 tlsconfig.go:240] Starting DynamicServingCertificateController\n
Apr 08 15:04:13.176 E ns/openshift-machine-config-operator pod/machine-config-daemon-4vj85 node/ip-10-0-131-146.us-west-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
Apr 08 15:04:20.078 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-146.us-west-2.compute.internal" not ready since 2020-04-08 15:04:01 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 08 15:04:20.086 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-146.us-west-2.compute.internal" not ready since 2020-04-08 15:04:01 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 08 15:04:20.090 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-146.us-west-2.compute.internal" not ready since 2020-04-08 15:04:01 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Apr 08 15:04:20.090 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-131-146.us-west-2.compute.internal" not ready since 2020-04-08 15:04:01 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nEtcdMembersDegraded: ip-10-0-131-146.us-west-2.compute.internal members are unhealthy,  members are unknown
Apr 08 15:04:39.401 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: ip-10-0-131-146.us-west-2.compute.internal members are unhealthy,  members are unknown