Result: SUCCESS
Tests: 3 failed / 20 succeeded
Started: 2020-02-18 15:00
Elapsed: 1h58m
Work namespace: ci-op-v4nyst74
Refs: release-4.3:3ce21b38
      298:666618c0
Pod: 5ef16c49-525f-11ea-a7f6-0a58ac107589
Repo: openshift/cluster-api-provider-aws
Revision: 1

Test Failures


Cluster upgrade control-plane-upgrade 36m25s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\scontrol\-plane\-upgrade$'
API was unreachable during upgrade for at least 1m26s:

Feb 18 16:21:48.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 16:21:49.801 - 10s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:22:01.399 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:33:00.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 16:33:01.801 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:33:15.821 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:33:18.314 I openshift-apiserver OpenShift API stopped responding to GET requests: the server is currently unable to handle the request (get imagestreams.image.openshift.io missing)
Feb 18 16:33:18.801 - 1s    E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:33:21.418 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:35:42.834 E kube-apiserver Kube API started failing: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 3.224.145.150:6443: connect: connection refused
Feb 18 16:35:43.801 E kube-apiserver Kube API is not responding to GET requests
Feb 18 16:35:43.827 I kube-apiserver Kube API started responding to GET requests
Feb 18 16:35:57.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 16:35:58.801 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:36:12.823 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:36:31.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 16:36:31.826 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:36:47.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 16:36:48.801 - 14s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:37:02.821 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:37:19.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 16:37:20.801 - 13s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:37:34.820 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:37:50.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 18 16:37:50.820 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:38:46.588 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Feb 18 16:38:46.801 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:39:01.837 I openshift-apiserver OpenShift API started responding to GET requests
Feb 18 16:39:17.802 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Feb 18 16:39:17.820 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1582044314.xml



Cluster upgrade k8s-service-upgrade 37m55s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sk8s\-service\-upgrade$'
Service was unreachable during upgrade for at least 24s:

Feb 18 16:21:43.160 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service stopped responding to GET requests on reused connections
Feb 18 16:21:44.159 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service is not responding to GET requests on reused connections
Feb 18 16:21:44.194 I ns/e2e-k8s-service-upgrade-2037 svc/service-test Service started responding to GET requests on reused connections
Feb 18 16:21:54.160 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 16:21:55.159 - 10s   E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service is not responding to GET requests over new connections
Feb 18 16:22:06.152 I ns/e2e-k8s-service-upgrade-2037 svc/service-test Service started responding to GET requests over new connections
Feb 18 16:22:33.160 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service stopped responding to GET requests on reused connections
Feb 18 16:22:34.017 I ns/e2e-k8s-service-upgrade-2037 svc/service-test Service started responding to GET requests on reused connections
Feb 18 16:35:53.160 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service stopped responding to GET requests on reused connections
Feb 18 16:35:53.195 I ns/e2e-k8s-service-upgrade-2037 svc/service-test Service started responding to GET requests on reused connections
Feb 18 16:36:29.017 E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service stopped responding to GET requests over new connections
Feb 18 16:36:29.159 - 8s    E ns/e2e-k8s-service-upgrade-2037 svc/service-test Service is not responding to GET requests over new connections
Feb 18 16:36:37.610 I ns/e2e-k8s-service-upgrade-2037 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1582044314.xml



openshift-tests Monitor cluster while tests execute 37m59s

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
214 error level events were detected during this test run:

Feb 18 16:12:27.895 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-7f946fd84b-xlt87 node/ip-10-0-157-11.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 5921 (15270)\nW0218 16:04:44.817354       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: too old resource version: 12031 (14167)\nW0218 16:04:44.817503       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14600 (14717)\nW0218 16:04:44.817573       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: too old resource version: 10375 (14167)\nW0218 16:04:44.817645       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 9704 (14164)\nW0218 16:04:44.817781       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 6230 (14275)\nW0218 16:10:08.942397       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18288 (18365)\nW0218 16:10:14.598186       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18365 (18385)\nW0218 16:10:47.613356       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18385 (18561)\nW0218 16:10:58.556543       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18561 (18643)\nI0218 16:12:27.059896       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:12:27.059985       1 builder.go:217] server exited\nI0218 16:12:27.069581       1 secure_serving.go:167] Stopped listening on [::]:8443\n
Feb 18 16:12:36.922 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-9579fdb4d-cf6dg node/ip-10-0-157-11.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): (15520)\nW0218 16:06:36.920122       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15434 (16411)\nW0218 16:06:36.920183       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: too old resource version: 14821 (15515)\nW0218 16:06:36.931221       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Role ended with: too old resource version: 12030 (15520)\nW0218 16:06:36.931288       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: too old resource version: 15512 (16411)\nW0218 16:06:36.959511       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.FeatureGate ended with: too old resource version: 7592 (15592)\nW0218 16:06:36.959670       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 9704 (15515)\nW0218 16:10:08.901209       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18288 (18365)\nW0218 16:10:14.581912       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18365 (18385)\nW0218 16:10:47.633851       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18385 (18561)\nW0218 16:10:58.629641       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 18561 (18643)\nI0218 16:12:36.137396       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:12:36.137527       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:14:21.279 E ns/openshift-machine-api pod/machine-api-operator-7bd5755d4c-9plt9 node/ip-10-0-157-11.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 18 16:17:15.609 E ns/openshift-apiserver pod/apiserver-xzfdr node/ip-10-0-134-245.ec2.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:17:21.674 E ns/openshift-machine-api pod/machine-api-controllers-68989fdc9c-gdvtv node/ip-10-0-134-245.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:17:34.734 E ns/openshift-ingress-operator pod/ingress-operator-5df95bc486-h26v2 node/ip-10-0-134-245.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:17:34.734 E ns/openshift-ingress-operator pod/ingress-operator-5df95bc486-h26v2 node/ip-10-0-134-245.ec2.internal container=ingress-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:17:43.091 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-b5ddd97d-68l5c node/ip-10-0-141-138.ec2.internal container=cluster-node-tuning-operator container exited with code 255 (Error): ned openshift-cluster-node-tuning-operator/default\nI0218 16:07:48.465690       1 status.go:25] syncOperatorStatus()\nI0218 16:07:48.475640       1 tuned_controller.go:188] syncServiceAccount()\nI0218 16:07:48.475763       1 tuned_controller.go:218] syncClusterRole()\nI0218 16:07:48.514973       1 tuned_controller.go:251] syncClusterRoleBinding()\nI0218 16:07:48.556607       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:07:48.560218       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:07:48.564945       1 tuned_controller.go:323] syncDaemonSet()\nW0218 16:14:11.388838       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ClusterRole ended with: too old resource version: 15520 (16717)\nW0218 16:14:11.389067       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ClusterRoleBinding ended with: too old resource version: 15610 (16718)\nW0218 16:14:11.405702       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.ConfigMap ended with: too old resource version: 16684 (19518)\nW0218 16:14:11.475576       1 reflector.go:299] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Tuned ended with: too old resource version: 15591 (19803)\nI0218 16:14:12.478988       1 tuned_controller.go:425] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0218 16:14:12.479111       1 status.go:25] syncOperatorStatus()\nI0218 16:14:12.509150       1 tuned_controller.go:188] syncServiceAccount()\nI0218 16:14:12.509312       1 tuned_controller.go:218] syncClusterRole()\nI0218 16:14:12.624309       1 tuned_controller.go:251] syncClusterRoleBinding()\nI0218 16:14:12.697255       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:14:12.703890       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:14:12.709587       1 tuned_controller.go:323] syncDaemonSet()\nF0218 16:17:42.143594       1 main.go:82] <nil>\n
Feb 18 16:17:55.057 E ns/openshift-cluster-node-tuning-operator pod/tuned-6r4fg node/ip-10-0-146-83.ec2.internal container=tuned container exited with code 143 (Error): 38] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:06:25.002437    2610 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-696455c955-xxld4) labels changed node wide: true\nI0218 16:06:29.865707    2610 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:06:29.867723    2610 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:06:29.980698    2610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:06:36.517755    2610 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:06:36.520654    2610 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:06:36.520754    2610 openshift-tuned.go:883] Increasing resyncPeriod to 126\nI0218 16:08:42.520997    2610 openshift-tuned.go:209] Extracting tuned profiles\nI0218 16:08:42.523671    2610 openshift-tuned.go:739] Resync period to pull node/pod labels: 126 [s]\nI0218 16:08:42.538708    2610 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-5qvhd) labels changed node wide: true\nI0218 16:08:47.535840    2610 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:08:47.540865    2610 openshift-tuned.go:390] Dumping labels to /var/lib/tuned/ocp-node-labels.cfg\nI0218 16:08:47.542290    2610 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:08:47.675174    2610 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:10:48.532070    2610 openshift-tuned.go:852] Lowering resyncPeriod to 63\nI0218 16:16:06.521351    2610 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:16:06.528935    2610 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:16:06.528957    2610 openshift-tuned.go:883] Increasing resyncPeriod to 126\n
Feb 18 16:18:00.160 E ns/openshift-controller-manager pod/controller-manager-lnkkv node/ip-10-0-141-138.ec2.internal container=controller-manager container exited with code 137 (OOMKilled): 
Feb 18 16:18:14.865 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-7457879c86-qkg5c node/ip-10-0-157-11.ec2.internal container=operator container exited with code 255 (Error): TransitionTime":"2020-02-18T15:53:37Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T16:18:09Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-18T15:56:09Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T15:53:37Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0218 16:18:09.770732       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-controller-manager-operator", Name:"openshift-controller-manager-operator", UID:"181901f5-59aa-4723-ba96-a69580a8b06f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("")\nI0218 16:18:09.957220       1 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134\nI0218 16:18:09.957567       1 reflector.go:158] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134\nI0218 16:18:09.957766       1 reflector.go:158] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134\nI0218 16:18:10.024610       1 reflector.go:158] Listing and watching *v1.Build from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:18:10.024932       1 reflector.go:158] Listing and watching *v1.Image from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:18:10.025244       1 reflector.go:158] Listing and watching *v1.Network from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:18:10.079056       1 reflector.go:158] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:134\nI0218 16:18:10.304122       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:18:10.304260       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:18:15.392 E ns/openshift-insights pod/insights-operator-6fd688684c-dnmqb node/ip-10-0-157-11.ec2.internal container=operator container exited with code 2 (Error): 0.245997       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0218 16:15:30.250840       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0218 16:15:30.254255       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0218 16:15:30.257611       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0218 16:15:30.261148       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0218 16:15:30.264490       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0218 16:15:30.264704       1 diskrecorder.go:170] Writing 37 records to /var/lib/insights-operator/insights-2020-02-18-161530.tar.gz\nI0218 16:15:30.267688       1 diskrecorder.go:134] Wrote 37 records to disk in 3ms\nI0218 16:15:30.267718       1 periodic.go:151] Periodic gather config completed in 79ms\nI0218 16:15:35.791538       1 httplog.go:90] GET /metrics: (6.832299ms) 200 [Prometheus/2.14.0 10.129.2.10:53256]\nI0218 16:15:43.997266       1 httplog.go:90] GET /metrics: (1.600673ms) 200 [Prometheus/2.14.0 10.131.0.18:43182]\nI0218 16:16:05.792674       1 httplog.go:90] GET /metrics: (7.911901ms) 200 [Prometheus/2.14.0 10.129.2.10:53256]\nI0218 16:16:13.996746       1 httplog.go:90] GET /metrics: (1.504543ms) 200 [Prometheus/2.14.0 10.131.0.18:43182]\nI0218 16:16:23.729663       1 status.go:298] The operator is healthy\nI0218 16:16:23.729729       1 status.go:373] No status update necessary, objects are identical\nI0218 16:16:35.795028       1 httplog.go:90] GET /metrics: (8.897347ms) 200 [Prometheus/2.14.0 10.129.2.10:53256]\nI0218 16:16:43.996867       1 httplog.go:90] GET /metrics: (1.529372ms) 200 [Prometheus/2.14.0 10.131.0.18:43182]\nI0218 16:17:05.804052       1 httplog.go:90] GET /metrics: (19.082332ms) 200 [Prometheus/2.14.0 10.129.2.10:53256]\nI0218 16:17:13.996711       1 httplog.go:90] GET /metrics: (1.508179ms) 200 [Prometheus/2.14.0 10.131.0.18:43182]\nI0218 16:17:35.795726       1 httplog.go:90] GET /metrics: (7.745547ms) 200 [Prometheus/2.14.0 10.129.2.10:53256]\n
Feb 18 16:18:20.059 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-controller-manager-8 container exited with code 255 (Error): istentVolume ended with: too old resource version: 16701 (22271)\nW0218 16:18:18.546591       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 22173 (22292)\nE0218 16:18:18.546634       1 reflector.go:280] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: Failed to watch *v1.ImageStream: the server is currently unable to handle the request (get imagestreams.image.openshift.io)\nE0218 16:18:18.546677       1 reflector.go:280] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: Failed to watch *v1.DeploymentConfig: the server is currently unable to handle the request (get deploymentconfigs.apps.openshift.io)\nW0218 16:18:18.553067       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ControllerRevision ended with: too old resource version: 22021 (22292)\nW0218 16:18:18.553179       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.MutatingWebhookConfiguration ended with: too old resource version: 16720 (22292)\nW0218 16:18:18.553302       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.DaemonSet ended with: too old resource version: 22173 (22281)\nW0218 16:18:18.576509       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.ReplicaSet ended with: too old resource version: 22130 (22283)\nW0218 16:18:18.797641       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 17345 (22272)\nW0218 16:18:18.865392       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 19908 (22298)\nI0218 16:18:18.905592       1 leaderelection.go:287] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded\nF0218 16:18:18.905710       1 controllermanager.go:291] leaderelection lost\n
Feb 18 16:18:21.077 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=scheduler container exited with code 255 (OOMKilled): ormers/factory.go:134: watch of *v1.StorageClass ended with: too old resource version: 16718 (22291)\nW0218 16:18:18.795060       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: too old resource version: 16701 (22271)\nW0218 16:18:18.847132       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicaSet ended with: too old resource version: 22130 (22292)\nW0218 16:18:18.847233       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: too old resource version: 16701 (22272)\nW0218 16:18:18.847286       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: too old resource version: 17621 (22286)\nW0218 16:18:18.847339       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: too old resource version: 17405 (22275)\nW0218 16:18:18.847388       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: too old resource version: 16896 (22292)\nW0218 16:18:18.916992       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: too old resource version: 17588 (22276)\nW0218 16:18:19.008414       1 reflector.go:299] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: watch of *v1.Pod ended with: too old resource version: 22216 (22273)\nW0218 16:18:19.008548       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: too old resource version: 22213 (22273)\nW0218 16:18:19.012001       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSINode ended with: too old resource version: 16718 (22290)\nI0218 16:18:20.434153       1 leaderelection.go:287] failed to renew lease openshift-kube-scheduler/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded\nF0218 16:18:20.434186       1 server.go:264] leaderelection lost\n
Feb 18 16:18:31.224 E ns/openshift-operator-lifecycle-manager pod/packageserver-ccf75d86f-9bd7c node/ip-10-0-134-245.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:05.846 E ns/openshift-monitoring pod/prometheus-adapter-5444d986d6-7cjpf node/ip-10-0-142-223.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0218 16:05:21.570244       1 adapter.go:93] successfully using in-cluster auth\nI0218 16:05:21.994399       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 16:19:06.479 E ns/openshift-controller-manager pod/controller-manager-xlzx5 node/ip-10-0-134-245.ec2.internal container=controller-manager container exited with code 137 (Error): 
Feb 18 16:19:08.442 E ns/openshift-image-registry pod/node-ca-d9qt9 node/ip-10-0-139-148.ec2.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:12.499 E ns/openshift-controller-manager pod/controller-manager-hz9ng node/ip-10-0-134-245.ec2.internal container=controller-manager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:13.386 E ns/openshift-monitoring pod/openshift-state-metrics-5dd7798b7-99bfk node/ip-10-0-146-83.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 18 16:19:13.885 E ns/openshift-ingress pod/router-default-6895dd7f9c-xpmpt node/ip-10-0-142-223.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:26.141400       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:31.147300       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:36.143884       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:41.160017       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:46.141465       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:51.139719       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:56.144305       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:01.141678       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:06.142894       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:11.290708       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 16:19:20.434 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-146-83.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/18 16:05:27 Watching directory: "/etc/alertmanager/config"\n
Feb 18 16:19:20.434 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-146-83.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/18 16:05:28 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:05:28 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:05:28 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:05:28 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/18 16:05:28 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:05:28 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:05:28 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:05:28 http.go:96: HTTPS: listening on [::]:9095\n
Feb 18 16:19:22.478 E ns/openshift-monitoring pod/grafana-b7c8dc5bc-x8p2h node/ip-10-0-139-148.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 18 16:19:22.978 E ns/openshift-monitoring pod/node-exporter-6xdxc node/ip-10-0-142-223.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T15:59:17Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:17Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:19:25.465 E ns/openshift-monitoring pod/prometheus-adapter-5444d986d6-znnqs node/ip-10-0-146-83.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0218 16:05:20.953673       1 adapter.go:93] successfully using in-cluster auth\nI0218 16:05:21.875541       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 16:19:39.476 E ns/openshift-ingress pod/router-default-6895dd7f9c-kpqq9 node/ip-10-0-146-83.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:51.168108       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:18:56.149656       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:01.164556       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:06.169937       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:11.324187       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:16.317593       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:21.275713       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:26.278350       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:31.291844       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:19:36.281486       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 16:19:45.067 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-223.ec2.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:45.067 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-223.ec2.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:45.067 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-223.ec2.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:19:46.527 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-83.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T16:19:38.362Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T16:19:38.366Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T16:19:38.367Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T16:19:38.369Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T16:19:38.369Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T16:19:38.369Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T16:19:38.370Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T16:19:38.370Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 16:19:49.534 E ns/openshift-monitoring pod/node-exporter-sw9rg node/ip-10-0-139-148.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T15:59:06Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:19:53.547 E ns/openshift-monitoring pod/thanos-querier-696455c955-ttwvv node/ip-10-0-139-148.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/18 16:06:21 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 16:06:21 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:06:21 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:06:21 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/18 16:06:21 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:06:21 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 16:06:21 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:06:21 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/18 16:06:21 http.go:96: HTTPS: listening on [::]:9091\n
Feb 18 16:19:59.619 E ns/openshift-monitoring pod/node-exporter-rcfgz node/ip-10-0-141-138.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T15:59:28Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T15:59:28Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:20:00.328 E ns/openshift-controller-manager pod/controller-manager-ntwgf node/ip-10-0-157-11.ec2.internal container=controller-manager container exited with code 137 (Error): 
Feb 18 16:20:15.387 E ns/openshift-monitoring pod/node-exporter-dn8xd node/ip-10-0-157-11.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T15:58:55Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T15:58:55Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:20:18.147 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T16:20:14.962Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T16:20:14.965Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T16:20:14.967Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T16:20:14.969Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T16:20:14.969Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T16:20:14.969Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 16:21:04.620 E ns/openshift-console pod/console-56749c459-rghpm node/ip-10-0-157-11.ec2.internal container=console container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:21:06.834 E ns/openshift-sdn pod/sdn-controller-rn6ch node/ip-10-0-141-138.ec2.internal container=sdn-controller container exited with code 2 (Error):  with: too old resource version: 9704 (14164)\nW0218 16:04:44.736241       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 9292 (15477)\nW0218 16:04:44.814030       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 9694 (15478)\nI0218 16:07:18.567754       1 vnids.go:115] Allocated netid 3078593 for namespace "e2e-k8s-sig-apps-deployment-upgrade-7635"\nI0218 16:07:18.586808       1 vnids.go:115] Allocated netid 1579446 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-4771"\nI0218 16:07:18.601792       1 vnids.go:115] Allocated netid 13094189 for namespace "e2e-control-plane-upgrade-8249"\nI0218 16:07:18.616821       1 vnids.go:115] Allocated netid 1964534 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-381"\nI0218 16:07:18.634437       1 vnids.go:115] Allocated netid 13725636 for namespace "e2e-k8s-service-upgrade-2037"\nI0218 16:07:18.651713       1 vnids.go:115] Allocated netid 13162999 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-1463"\nI0218 16:07:18.671993       1 vnids.go:115] Allocated netid 12470889 for namespace "e2e-k8s-sig-apps-job-upgrade-5468"\nI0218 16:07:18.687906       1 vnids.go:115] Allocated netid 6579579 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-4395"\nW0218 16:16:07.199151       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: too old resource version: 17345 (19828)\nW0218 16:16:07.368951       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.NetNamespace ended with: too old resource version: 17114 (20693)\nW0218 16:16:07.445750       1 reflector.go:299] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: too old resource version: 15477 (20694)\n
Feb 18 16:21:12.642 E ns/openshift-sdn pod/sdn-controller-whtm2 node/ip-10-0-157-11.ec2.internal container=sdn-controller container exited with code 2 (Error): I0218 15:50:09.562584       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Feb 18 16:21:16.353 E ns/openshift-sdn pod/sdn-zgf49 node/ip-10-0-142-223.ec2.internal container=sdn container exited with code 255 (Error): points for openshift-console/console:https to [10.128.0.28:8443 10.130.0.54:8443]\nI0218 16:21:02.914970    2919 roundrobin.go:218] Delete endpoint 10.129.0.33:8443 for service "openshift-console/console:https"\nI0218 16:21:03.045410    2919 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:03.115632    2919 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:03.115664    2919 proxier.go:350] userspace syncProxyRules took 70.224103ms\nI0218 16:21:03.115680    2919 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:03.115697    2919 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:21:03.286021    2919 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:03.355728    2919 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:03.355755    2919 proxier.go:350] userspace syncProxyRules took 69.705565ms\nI0218 16:21:03.355768    2919 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:04.444474    2919 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.130.0.2:6443]\nI0218 16:21:04.444519    2919 roundrobin.go:218] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 16:21:04.444575    2919 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:21:04.616111    2919 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:04.684450    2919 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:04.684478    2919 proxier.go:350] userspace syncProxyRules took 68.33885ms\nI0218 16:21:04.684494    2919 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:15.513431    2919 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 16:21:15.513475    2919 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 16:21:26.977 E ns/openshift-sdn pod/sdn-controller-npt2m node/ip-10-0-134-245.ec2.internal container=sdn-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:21:41.057 E ns/openshift-sdn pod/sdn-wtlq7 node/ip-10-0-134-245.ec2.internal container=sdn container exited with code 255 (Error): plete\nI0218 16:21:03.478215    3109 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:03.478241    3109 proxier.go:350] userspace syncProxyRules took 82.118907ms\nI0218 16:21:03.478286    3109 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:04.441740    3109 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443 10.130.0.2:6443]\nI0218 16:21:04.441778    3109 roundrobin.go:218] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 16:21:04.441911    3109 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:21:04.641168    3109 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:04.716105    3109 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:04.716129    3109 proxier.go:350] userspace syncProxyRules took 74.933306ms\nI0218 16:21:04.716140    3109 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:16.315702    3109 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.3:6443]\nI0218 16:21:16.315744    3109 roundrobin.go:218] Delete endpoint 10.130.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 16:21:16.315801    3109 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:21:16.627320    3109 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:16.705636    3109 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:16.705677    3109 proxier.go:350] userspace syncProxyRules took 78.259387ms\nI0218 16:21:16.705689    3109 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:21:40.263089    3109 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 16:21:40.263154    3109 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 16:21:45.553 - 15s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:21:46.045 E ns/openshift-cloud-credential-operator pod/cloud-credential-operator-78bfdd8db7-hvhbq node/ip-10-0-134-245.ec2.internal container=manager container exited with code 1 (Error): ft-cloud-credential-operator/openshift-network\ntime="2020-02-18T16:18:28Z" level=debug msg="ignoring cr as it is for a different cloud" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T16:18:28Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T16:18:28Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-network secret=openshift-network-operator/installer-cloud-credentials\ntime="2020-02-18T16:18:28Z" level=debug msg="syncing cluster operator status" controller=credreq_status\ntime="2020-02-18T16:18:28Z" level=debug msg="4 cred requests" controller=credreq_status\ntime="2020-02-18T16:18:28Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded\ntime="2020-02-18T16:18:28Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message="4 of 4 credentials requests provisioned and reconciled." reason=ReconcilingComplete status=False type=Progressing\ntime="2020-02-18T16:18:28Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Available\ntime="2020-02-18T16:18:28Z" level=debug msg="set ClusterOperator condition" controller=credreq_status message= reason= status=True type=Upgradeable\ntime="2020-02-18T16:18:28Z" level=info msg="Verified cloud creds can be used for minting new creds" controller=secretannotator\ntime="2020-02-18T16:20:27Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics\ntime="2020-02-18T16:20:27Z" level=info msg="reconcile complete" controller=metrics elapsed=1.286705ms\ntime="2020-02-18T16:21:45Z" level=error msg="leader election lostunable to run the manager"\n
Feb 18 16:21:47.036 E ns/openshift-multus pod/multus-admission-controller-999n4 node/ip-10-0-134-245.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Feb 18 16:21:47.851 E ns/openshift-multus pod/multus-ldd65 node/ip-10-0-142-223.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 16:22:05.824 E ns/openshift-sdn pod/sdn-p2zpc node/ip-10-0-139-148.ec2.internal container=sdn container exited with code 255 (Error): 8 16:21:55.257990    3459 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:21:55.445746    3459 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:21:55.520620    3459 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:21:55.520650    3459 proxier.go:350] userspace syncProxyRules took 74.879234ms\nI0218 16:21:55.520666    3459 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:04.003595    3459 roundrobin.go:270] LoadBalancerRR: Setting endpoints for e2e-k8s-service-upgrade-2037/service-test: to [10.131.0.22:80]\nI0218 16:22:04.003625    3459 roundrobin.go:218] Delete endpoint 10.128.2.15:80 for service "e2e-k8s-service-upgrade-2037/service-test:"\nI0218 16:22:04.003677    3459 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:04.166864    3459 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:04.233643    3459 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:04.233666    3459 proxier.go:350] userspace syncProxyRules took 66.77912ms\nI0218 16:22:04.233677    3459 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:04.253973    3459 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-console-operator/metrics:https to [10.130.0.48:8443]\nI0218 16:22:04.253998    3459 roundrobin.go:218] Delete endpoint 10.130.0.48:8443 for service "openshift-console-operator/metrics:https"\nI0218 16:22:04.254032    3459 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:04.413729    3459 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:04.481597    3459 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:04.481623    3459 proxier.go:350] userspace syncProxyRules took 67.870752ms\nI0218 16:22:04.481637    3459 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nF0218 16:22:05.488490    3459 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Feb 18 16:22:11.113 E ns/openshift-service-ca pod/service-serving-cert-signer-b7874ff67-dhfsx node/ip-10-0-134-245.ec2.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 18 16:22:25.106 E ns/openshift-console pod/console-56749c459-bxgwd node/ip-10-0-141-138.ec2.internal container=console container exited with code 2 (Error): -op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:03:26 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:03:36 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:03:46 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:03:56 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:04:06 auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com: x509: certificate signed by unknown authority\n2020/02/18 16:04:16 cmd/main: Binding to [::]:8443...\n2020/02/18 16:04:16 cmd/main: using TLS\n
Feb 18 16:22:31.140 E ns/openshift-sdn pod/sdn-fsd7x node/ip-10-0-141-138.ec2.internal container=sdn container exited with code 255 (Error): 22:24.414393   12071 roundrobin.go:218] Delete endpoint 10.129.0.67:8443 for service "openshift-console/console:https"\nI0218 16:22:24.414452   12071 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:24.447363   12071 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.129.0.67:8443 10.130.0.54:8443]\nI0218 16:22:24.447493   12071 roundrobin.go:218] Delete endpoint 10.128.0.28:8443 for service "openshift-console/console:https"\nI0218 16:22:24.617576   12071 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:24.685073   12071 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nE0218 16:22:24.685104   12071 pod.go:232] Error updating OVS multicast flows for VNID 164883: exit status 1\nI0218 16:22:24.690509   12071 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0218 16:22:24.694977   12071 pod.go:539] CNI_DEL openshift-console/console-56749c459-bxgwd\nI0218 16:22:24.707361   12071 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:24.707384   12071 proxier.go:350] userspace syncProxyRules took 89.783121ms\nI0218 16:22:24.707397   12071 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:24.707413   12071 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:24.916431   12071 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:25.018817   12071 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:25.018843   12071 proxier.go:350] userspace syncProxyRules took 102.384599ms\nI0218 16:22:25.018855   12071 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:30.596041   12071 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 16:22:30.596083   12071 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 16:22:35.278 E ns/openshift-multus pod/multus-q4qz4 node/ip-10-0-134-245.ec2.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:22:55.989 E ns/openshift-sdn pod/sdn-fw79x node/ip-10-0-157-11.ec2.internal container=sdn container exited with code 255 (Error): 521   12695 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:24.446439   12695 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-console/console:https to [10.129.0.67:8443 10.130.0.54:8443]\nI0218 16:22:24.446478   12695 roundrobin.go:218] Delete endpoint 10.128.0.28:8443 for service "openshift-console/console:https"\nI0218 16:22:24.608856   12695 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:24.700000   12695 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:24.700024   12695 proxier.go:350] userspace syncProxyRules took 91.143203ms\nI0218 16:22:24.700034   12695 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:24.700053   12695 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:24.883785   12695 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:24.954287   12695 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:24.954316   12695 proxier.go:350] userspace syncProxyRules took 70.504344ms\nI0218 16:22:24.954331   12695 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:48.369197   12695 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0218 16:22:54.954539   12695 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:22:55.147310   12695 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:22:55.234679   12695 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:22:55.234707   12695 proxier.go:350] userspace syncProxyRules took 87.37164ms\nI0218 16:22:55.234728   12695 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:22:55.529647   12695 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 16:22:55.529778   12695 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 16:23:13.973 E ns/openshift-sdn pod/sdn-4gttg node/ip-10-0-146-83.ec2.internal container=sdn container exited with code 255 (Error): 22:42.652011   92882 cmd.go:177] openshift-sdn network plugin ready\nI0218 16:23:02.234655   92882 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.61:6443 10.129.0.68:6443]\nI0218 16:23:02.234703   92882 roundrobin.go:218] Delete endpoint 10.128.0.61:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 16:23:02.234774   92882 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:23:02.406020   92882 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:23:02.475585   92882 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:23:02.475610   92882 proxier.go:350] userspace syncProxyRules took 69.561747ms\nI0218 16:23:02.475620   92882 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:23:10.377161   92882 roundrobin.go:270] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.61:6443 10.129.0.68:6443 10.130.0.57:6443]\nI0218 16:23:10.377203   92882 roundrobin.go:218] Delete endpoint 10.130.0.57:6443 for service "openshift-multus/multus-admission-controller:"\nI0218 16:23:10.377273   92882 proxy.go:334] hybrid proxy: syncProxyRules start\nI0218 16:23:10.549105   92882 proxy.go:337] hybrid proxy: mainProxy.syncProxyRules complete\nI0218 16:23:10.631965   92882 proxier.go:371] userspace proxy: processing 0 service events\nI0218 16:23:10.631993   92882 proxier.go:350] userspace syncProxyRules took 82.860258ms\nI0218 16:23:10.632005   92882 proxy.go:340] hybrid proxy: unidlingProxy.syncProxyRules complete\nI0218 16:23:11.671917   92882 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0218 16:23:13.887472   92882 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: br0 is not a bridge or a socket\nF0218 16:23:13.887523   92882 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Feb 18 16:23:29.004 E ns/openshift-multus pod/multus-xclns node/ip-10-0-146-83.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 16:24:10.209 E ns/openshift-multus pod/multus-j5pfv node/ip-10-0-157-11.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 16:24:47.583 E ns/openshift-multus pod/multus-b4bxj node/ip-10-0-141-138.ec2.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:25:35.458 E ns/openshift-multus pod/multus-wm5kp node/ip-10-0-139-148.ec2.internal container=kube-multus container exited with code 137 (Error): 
Feb 18 16:26:06.541 E ns/openshift-dns pod/dns-default-985vl node/ip-10-0-139-148.ec2.internal container=dns container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:26:06.541 E ns/openshift-dns pod/dns-default-985vl node/ip-10-0-139-148.ec2.internal container=dns-node-resolver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:26:18.645 E ns/openshift-machine-config-operator pod/machine-config-operator-548554c669-rwbw2 node/ip-10-0-157-11.ec2.internal container=machine-config-operator container exited with code 2 (Error): ons/factory.go:101: watch of *v1.MachineConfig ended with: too old resource version: 14264 (19992)\nW0218 16:16:07.294160       1 reflector.go:299] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: watch of *v1beta1.CustomResourceDefinition ended with: too old resource version: 18638 (19826)\nW0218 16:16:07.294315       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: too old resource version: 17422 (19828)\nW0218 16:16:07.342192       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.MachineConfigPool ended with: too old resource version: 14263 (19974)\nW0218 16:16:07.396989       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.Deployment ended with: too old resource version: 18245 (19829)\nW0218 16:16:07.420614       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRoleBinding ended with: too old resource version: 15610 (19829)\nW0218 16:16:07.436761       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.DaemonSet ended with: too old resource version: 17580 (19829)\nW0218 16:16:07.584776       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 14338 (19925)\nW0218 16:16:07.668404       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 14276 (19974)\nW0218 16:16:07.698039       1 reflector.go:299] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: watch of *v1.ControllerConfig ended with: too old resource version: 14345 (19947)\nW0218 16:16:07.708811       1 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: too old resource version: 14167 (19829)\n
Feb 18 16:28:49.656 E ns/openshift-machine-config-operator pod/machine-config-daemon-5qvhd node/ip-10-0-146-83.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:28:58.401 E ns/openshift-machine-config-operator pod/machine-config-daemon-zj4bg node/ip-10-0-141-138.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:29:15.191 E ns/openshift-machine-config-operator pod/machine-config-daemon-z5hz5 node/ip-10-0-157-11.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:29:30.925 E ns/openshift-machine-config-operator pod/machine-config-daemon-dz79b node/ip-10-0-139-148.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:32:17.023 E ns/openshift-machine-config-operator pod/machine-config-server-twwkm node/ip-10-0-141-138.ec2.internal container=machine-config-server container exited with code 2 (Error): I0218 15:54:20.445837       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 15:54:20.447129       1 api.go:51] Launching server on :22624\nI0218 15:54:20.447187       1 api.go:51] Launching server on :22623\nI0218 15:55:48.118624       1 api.go:97] Pool worker requested by 10.0.155.57:32030\n
Feb 18 16:32:19.756 E ns/openshift-machine-config-operator pod/machine-config-server-gmrv6 node/ip-10-0-157-11.ec2.internal container=machine-config-server container exited with code 2 (Error): I0218 15:54:20.524075       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 15:54:20.525577       1 api.go:51] Launching server on :22624\nI0218 15:54:20.525700       1 api.go:51] Launching server on :22623\n
Feb 18 16:32:26.854 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Feb 18 16:32:27.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-83.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/18 16:19:41 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 18 16:32:27.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-83.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/02/18 16:19:44 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/18 16:19:44 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:19:44 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:19:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/18 16:19:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:19:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/02/18 16:19:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:19:44 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/18 16:19:44 http.go:96: HTTPS: listening on [::]:9091\n2020/02/18 16:20:11 oauthproxy.go:774: basicauth: 10.128.2.24:56120 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/18 16:22:03 reverseproxy.go:447: http: proxy error: context canceled\n2020/02/18 16:24:41 oauthproxy.go:774: basicauth: 10.128.2.24:58024 Authorization header does not start with 'Basic', skipping basic authentication\n2020/02/18 16:29:11 oauthproxy.go:774: basicauth: 10.128.2.24:59846 Authorization header does not start with 'Basic', skipping basic authentication\n
Feb 18 16:32:27.211 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-83.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-18T16:19:40.101636451Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-18T16:19:40.101772423Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-18T16:19:40.103569207Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-02-18T16:19:45.103280977Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-18T16:19:50.24421841Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 18 16:32:27.274 E ns/openshift-monitoring pod/prometheus-adapter-5655fc76d9-djvt4 node/ip-10-0-146-83.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0218 16:19:24.049756       1 adapter.go:93] successfully using in-cluster auth\nI0218 16:19:24.534888       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 16:32:28.338 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-146-83.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/18 16:19:42 Watching directory: "/etc/alertmanager/config"\n
Feb 18 16:32:28.338 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-146-83.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/18 16:19:42 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:19:42 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:19:42 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:19:42 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/18 16:19:42 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:19:42 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:19:42 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:19:42 http.go:96: HTTPS: listening on [::]:9095\n2020/02/18 16:22:05 reverseproxy.go:447: http: proxy error: context canceled\n
Feb 18 16:32:29.277 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-5dffwhlbx node/ip-10-0-141-138.ec2.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:32:29.309 E ns/openshift-machine-config-operator pod/machine-config-operator-7f98f9bdd5-2chzh node/ip-10-0-141-138.ec2.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0218 16:28:13.433626       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"ebe286ab-b08f-4e61-bb2a-c85f7b898935", ResourceVersion:"29635", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717638003, loc:(*time.Location)(0x271b9e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-7f98f9bdd5-2chzh_0d376e25-b37e-4516-8382-fcae3783adf9\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-02-18T16:28:13Z\",\"renewTime\":\"2020-02-18T16:28:13Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7f98f9bdd5-2chzh_0d376e25-b37e-4516-8382-fcae3783adf9 became leader'\nI0218 16:28:13.433716       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config\nI0218 16:28:13.955546       1 operator.go:246] Starting MachineConfigOperator\nI0218 16:28:13.961677       1 event.go:255] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"a0e5707c-8617-4336-a90d-545793a8ded9", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-02-18-150138}] to [{operator 0.0.1-2020-02-18-150607}]\n
Feb 18 16:32:30.229 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-gz7wl node/ip-10-0-141-138.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:32:30.278 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7b94648485-ztnvw node/ip-10-0-141-138.ec2.internal container=cluster-node-tuning-operator container exited with code 255 (Error): Map()\nI0218 16:18:02.294385       1 tuned_controller.go:323] syncDaemonSet()\nI0218 16:18:05.046297       1 tuned_controller.go:425] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0218 16:18:05.046331       1 status.go:25] syncOperatorStatus()\nI0218 16:18:05.056505       1 tuned_controller.go:188] syncServiceAccount()\nI0218 16:18:05.056646       1 tuned_controller.go:218] syncClusterRole()\nI0218 16:18:05.094799       1 tuned_controller.go:251] syncClusterRoleBinding()\nI0218 16:18:05.138441       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:18:05.142617       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:18:05.146573       1 tuned_controller.go:323] syncDaemonSet()\nI0218 16:18:25.416188       1 tuned_controller.go:425] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0218 16:18:25.416954       1 status.go:25] syncOperatorStatus()\nI0218 16:18:25.492436       1 tuned_controller.go:188] syncServiceAccount()\nI0218 16:18:25.492719       1 tuned_controller.go:218] syncClusterRole()\nI0218 16:18:25.799104       1 tuned_controller.go:251] syncClusterRoleBinding()\nI0218 16:18:25.901348       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:18:25.923931       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:18:25.946104       1 tuned_controller.go:323] syncDaemonSet()\nI0218 16:27:54.847428       1 tuned_controller.go:425] Reconciling Tuned openshift-cluster-node-tuning-operator/default\nI0218 16:27:54.847473       1 status.go:25] syncOperatorStatus()\nI0218 16:27:54.858842       1 tuned_controller.go:188] syncServiceAccount()\nI0218 16:27:54.858986       1 tuned_controller.go:218] syncClusterRole()\nI0218 16:27:54.902116       1 tuned_controller.go:251] syncClusterRoleBinding()\nI0218 16:27:54.947849       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:27:54.952697       1 tuned_controller.go:284] syncClusterConfigMap()\nI0218 16:27:54.958629       1 tuned_controller.go:323] syncDaemonSet()\nF0218 16:32:28.935213       1 main.go:82] <nil>\n
Feb 18 16:32:31.257 E ns/openshift-machine-api pod/machine-api-operator-59bfd6cf7b-bkjj9 node/ip-10-0-141-138.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Feb 18 16:32:32.350 E ns/openshift-operator-lifecycle-manager pod/olm-operator-6d56b8b67b-r4f9h node/ip-10-0-141-138.ec2.internal container=olm-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:32:33.410 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6d674f5d5c-d48j6 node/ip-10-0-141-138.ec2.internal container=kube-controller-manager-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:32:44.432 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-142-223.ec2.internal container=alertmanager-proxy container exited with code 1 (Error): 2020/02/18 16:32:43 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:32:43 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:32:43 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:32:43 main.go:138: Invalid configuration:\n  unable to load OpenShift configuration: unable to retrieve authentication information for tokens: Post https://172.30.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 18 16:32:48.201 E ns/openshift-monitoring pod/prometheus-operator-858cf44b84-cwv8v node/ip-10-0-157-11.ec2.internal container=prometheus-operator container exited with code 1 (Error): ts=2020-02-18T16:32:47.257762349Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."\nts=2020-02-18T16:32:47.267574257Z caller=main.go:96 msg="Staring insecure server on :8080"\nts=2020-02-18T16:32:47.269157016Z caller=main.go:315 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 18 16:32:51.547 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-k7qgt node/ip-10-0-134-245.ec2.internal container=machine-controller container exited with code 255 (Error): 
Feb 18 16:32:51.547 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-k7qgt node/ip-10-0-134-245.ec2.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Feb 18 16:32:53.395 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-139-148.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T16:32:47.596Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T16:32:47.599Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T16:32:47.600Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T16:32:47.601Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T16:32:47.601Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T16:32:47.601Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T16:32:47.602Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T16:32:47.602Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 16:33:22.644 E ns/openshift-cluster-node-tuning-operator pod/tuned-vkgrc node/ip-10-0-134-245.ec2.internal container=tuned container exited with code 143 (Error): ing recommended profile...\nI0218 16:29:56.574785     624 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:32:24.082544     624 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-server-db9ns) labels changed node wide: true\nI0218 16:32:26.451414     624 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:26.453366     624 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:26.630685     624 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:32:26.760394     624 openshift-tuned.go:550] Pod (openshift-cluster-version/cluster-version-operator-664b9488c-qfplr) labels changed node wide: true\nI0218 16:32:31.451604     624 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:31.454162     624 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:31.616675     624 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:32:32.127746     624 openshift-tuned.go:550] Pod (openshift-kube-controller-manager-operator/kube-controller-manager-operator-6d674f5d5c-7v5fq) labels changed node wide: true\nI0218 16:32:36.452583     624 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:36.459345     624 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:36.847423     624 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:32:43.194877     624 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:32:43.201381     624 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:32:43.201474     624 openshift-tuned.go:883] Increasing resyncPeriod to 134\n
Feb 18 16:34:51.524 E ns/openshift-cluster-node-tuning-operator pod/tuned-kcpt4 node/ip-10-0-146-83.ec2.internal container=tuned container exited with code 143 (Error): go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:23:36.353146   75135 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:23:36.467883   75135 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:27:32.499571   75135 openshift-tuned.go:550] Pod (openshift-dns/dns-default-4fhzm) labels changed node wide: true\nI0218 16:27:36.351097   75135 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:27:36.353111   75135 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:27:36.467715   75135 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:28:54.317329   75135 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-daemon-5qvhd) labels changed node wide: true\nI0218 16:28:56.351056   75135 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:28:56.353421   75135 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:28:56.464893   75135 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:32:34.338469   75135 openshift-tuned.go:550] Pod (openshift-monitoring/grafana-669844cfcc-pfr9h) labels changed node wide: true\nI0218 16:32:36.351059   75135 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:36.354313   75135 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:36.470134   75135 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:32:43.171197   75135 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:32:43.175216   75135 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:32:43.175236   75135 openshift-tuned.go:883] Increasing resyncPeriod to 114\n
Feb 18 16:34:51.572 E ns/openshift-monitoring pod/node-exporter-s5hww node/ip-10-0-146-83.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:19:17Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:17Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:34:51.642 E ns/openshift-multus pod/multus-98fmg node/ip-10-0-146-83.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:34:51.673 E ns/openshift-machine-config-operator pod/machine-config-daemon-js9h2 node/ip-10-0-146-83.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:34:56.059 E ns/openshift-multus pod/multus-98fmg node/ip-10-0-146-83.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:35:00.192 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): r: mvcc: required revision has been compacted\nE0218 16:32:42.971797       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.971986       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.972166       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.972196       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.972196       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.972494       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.972599       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.979352       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.979576       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.979725       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.979797       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.979937       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:32:42.983439       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0218 16:32:43.053499       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-141-138.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 16:32:43.053755       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 18 16:35:00.192 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0218 16:16:08.927210       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 16:35:00.192 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:26:14.495185       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:26:14.495688       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:26:14.703289       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:26:14.703632       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 16:35:00.249 E ns/openshift-cluster-node-tuning-operator pod/tuned-rt9db node/ip-10-0-141-138.ec2.internal container=tuned container exited with code 143 (Error): 141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:31.104098     745 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-5-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:31.299540     745 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-4-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:31.486410     745 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-8-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:31.682778     745 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-7-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:31.887633     745 openshift-tuned.go:550] Pod (openshift-kube-scheduler/installer-2-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:32.290230     745 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-6-ip-10-0-141-138.ec2.internal) labels changed node wide: false\nI0218 16:32:32.500274     745 openshift-tuned.go:550] Pod (openshift-kube-apiserver/installer-3-ip-10-0-141-138.ec2.internal) labels changed node wide: true\nI0218 16:32:36.273104     745 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:36.274984     745 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:36.509284     745 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:32:39.534063     745 openshift-tuned.go:550] Pod (openshift-network-operator/network-operator-6d8c69b5f-km7r9) labels changed node wide: true\nI0218 16:32:41.273051     745 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:32:41.275030     745 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:32:41.412075     745 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\n
Feb 18 16:35:00.286 E ns/openshift-monitoring pod/node-exporter-2jg4g node/ip-10-0-141-138.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:20:13Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:13Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:35:00.299 E ns/openshift-controller-manager pod/controller-manager-nzml4 node/ip-10-0-141-138.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:35:00.316 E ns/openshift-sdn pod/sdn-controller-484gf node/ip-10-0-141-138.ec2.internal container=sdn-controller container exited with code 2 (Error): I0218 16:21:11.115128       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0218 16:21:11.141088       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"1c2a51c5-631f-41cd-8732-24a744ba842f", ResourceVersion:"25619", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717637807, loc:(*time.Location)(0x2b77ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-141-138\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-18T15:50:07Z\",\"renewTime\":\"2020-02-18T16:21:11Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-141-138 became leader'\nI0218 16:21:11.141171       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0218 16:21:11.147795       1 master.go:51] Initializing SDN master\nI0218 16:21:11.167624       1 network_controller.go:60] Started OpenShift Network Controller\n
Feb 18 16:35:00.376 E ns/openshift-multus pod/multus-admission-controller-m6p4b node/ip-10-0-141-138.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 18 16:35:00.401 E ns/openshift-multus pod/multus-sxtcd node/ip-10-0-141-138.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:35:00.423 E ns/openshift-machine-config-operator pod/machine-config-daemon-fpd8k node/ip-10-0-141-138.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:35:00.468 E ns/openshift-machine-config-operator pod/machine-config-server-cksqx node/ip-10-0-141-138.ec2.internal container=machine-config-server container exited with code 2 (Error): I0218 16:32:18.910817       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 16:32:18.912271       1 api.go:51] Launching server on :22624\nI0218 16:32:18.912321       1 api.go:51] Launching server on :22623\n
Feb 18 16:35:01.960 E ns/openshift-machine-config-operator pod/machine-config-daemon-js9h2 node/ip-10-0-146-83.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:35:03.699 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found]\nW0218 16:24:51.722329       1 reflector.go:289] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: The resourceVersion for the provided watch is too old.\nW0218 16:27:07.044625       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.\nW0218 16:27:07.056353       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: The resourceVersion for the provided watch is too old.\nW0218 16:28:02.068340       1 reflector.go:289] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: The resourceVersion for the provided watch is too old.\n
Feb 18 16:35:03.699 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:31:25.301113       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:31:25.301491       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:31:35.311542       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:31:35.311882       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:31:45.321822       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:31:45.322250       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:31:55.330535       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:31:55.331385       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:32:05.340124       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:32:05.340556       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:32:15.347821       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:32:15.348198       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:32:25.356194       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:32:25.356545       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:32:35.367374       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:32:35.367843       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 18 16:35:03.699 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): tes.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1582042491" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582042491" (2020-02-18 15:14:51 +0000 UTC to 2021-02-17 15:14:51 +0000 UTC (now=2020-02-18 16:14:51.618193803 +0000 UTC))\nI0218 16:14:51.618289       1 secure_serving.go:178] Serving securely on [::]:10257\nI0218 16:14:51.618349       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0218 16:14:51.619510       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nE0218 16:16:07.263788       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps/kube-controller-manager?timeout=10s: dial tcp [::1]:6443: connect: connection refused\nE0218 16:16:14.271931       1 webhook.go:107] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\nE0218 16:16:14.275583       1 authentication.go:89] Unable to authenticate the request due to an error: [invalid bearer token, tokenreviews.authentication.k8s.io is forbidden: User "system:kube-controller-manager" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope]\nE0218 16:16:14.290200       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: configmaps "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-controller-manager" not found, role.rbac.authorization.k8s.io "system:openshift:leader-election-lock-kube-controller-manager" not found]\n
Feb 18 16:35:04.845 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-141-138.ec2.internal node/ip-10-0-141-138.ec2.internal container=scheduler container exited with code 2 (Error): /factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0218 16:16:14.090895       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0218 16:16:14.091005       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: unknown (get pods)\nE0218 16:16:14.176128       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0218 16:16:14.196036       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0218 16:16:14.247827       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)\nE0218 16:16:14.247857       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0218 16:16:14.247894       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0218 16:16:14.247917       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)\nE0218 16:16:14.247998       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0218 16:16:14.304677       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0218 16:16:14.312529       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0218 16:16:14.335192       1 reflector.go:280] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\n
Feb 18 16:35:09.256 E ns/openshift-multus pod/multus-sxtcd node/ip-10-0-141-138.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:35:12.310 E ns/openshift-machine-config-operator pod/machine-config-daemon-fpd8k node/ip-10-0-141-138.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:35:19.980 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 18 16:35:28.365 E ns/openshift-console-operator pod/console-operator-7cb747bcf8-9w54g node/ip-10-0-134-245.ec2.internal container=console-operator container exited with code 255 (Error): event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"1a70909a-240f-4d0f-8f24-94ff2ed20a00", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "" to "RouteSyncDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)"\nI0218 16:33:15.486559       1 status_controller.go:165] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-18T15:58:18Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T16:23:10Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-18T16:23:10Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T15:58:18Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0218 16:33:15.493508       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"1a70909a-240f-4d0f-8f24-94ff2ed20a00", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Degraded message changed from "RouteSyncDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to ""\nE0218 16:33:18.307424       1 controller.go:280] clidownloads-sync-work-queue-key failed with : the server is currently unable to handle the request (get routes.route.openshift.io downloads)\nE0218 16:33:21.380062       1 controller.go:280] clidownloads-sync-work-queue-key failed with : the server is currently unable to handle the request (get routes.route.openshift.io downloads)\nI0218 16:35:26.735147       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:35:26.735289       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:35:29.546 E ns/openshift-authentication pod/oauth-openshift-74f59686f8-x7m9h node/ip-10-0-134-245.ec2.internal container=oauth-openshift container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:35:32.024 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-k7qgt node/ip-10-0-134-245.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:35:32.024 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-k7qgt node/ip-10-0-134-245.ec2.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Feb 18 16:35:32.024 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-k7qgt node/ip-10-0-134-245.ec2.internal container=machine-controller container exited with code 255 (Error): 
Feb 18 16:35:32.068 E ns/openshift-service-ca-operator pod/service-ca-operator-887c99b87-kdm54 node/ip-10-0-134-245.ec2.internal container=operator container exited with code 255 (Error): 
Feb 18 16:35:33.045 E ns/openshift-service-ca pod/apiservice-cabundle-injector-855d88b757-kddg2 node/ip-10-0-134-245.ec2.internal container=apiservice-cabundle-injector-controller container exited with code 255 (Error): 
Feb 18 16:35:33.062 E ns/openshift-service-ca pod/service-serving-cert-signer-b7874ff67-dhfsx node/ip-10-0-134-245.ec2.internal container=service-serving-cert-signer-controller container exited with code 255 (Error): 
Feb 18 16:35:33.119 E ns/openshift-service-ca pod/configmap-cabundle-injector-56c7c89ffc-ttf62 node/ip-10-0-134-245.ec2.internal container=configmap-cabundle-injector-controller container exited with code 255 (Error): 
Feb 18 16:35:33.490 E ns/openshift-operator-lifecycle-manager pod/packageserver-8449bc6b55-v8p5x node/ip-10-0-141-138.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:35:40.110 E ns/openshift-monitoring pod/openshift-state-metrics-6d8494564d-j9gmz node/ip-10-0-139-148.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Feb 18 16:35:40.170 E ns/openshift-monitoring pod/prometheus-adapter-5655fc76d9-dwh5q node/ip-10-0-139-148.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0218 16:32:38.340799       1 adapter.go:93] successfully using in-cluster auth\nI0218 16:32:39.191521       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 16:35:40.274 E ns/openshift-monitoring pod/telemeter-client-7dd6dcd44c-55m4m node/ip-10-0-139-148.ec2.internal container=reload container exited with code 2 (Error): 
Feb 18 16:35:40.274 E ns/openshift-monitoring pod/telemeter-client-7dd6dcd44c-55m4m node/ip-10-0-139-148.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Feb 18 16:35:47.156 E ns/openshift-monitoring pod/prometheus-adapter-5655fc76d9-5d82n node/ip-10-0-146-83.ec2.internal container=prometheus-adapter container exited with code 255 (Error): I0218 16:35:46.802466       1 adapter.go:93] successfully using in-cluster auth\nF0218 16:35:46.810908       1 adapter.go:289] unable to install resource metrics API: unable to construct dynamic discovery mapper: unable to populate initial set of REST mappings: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 18 16:35:50.171 E ns/openshift-operator-lifecycle-manager pod/packageserver-79fd967559-56jrq node/ip-10-0-141-138.ec2.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-18T16:35:48Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 18 16:35:50.723 E ns/openshift-operator-lifecycle-manager pod/packageserver-77d6d56b8b-t5rn9 node/ip-10-0-141-138.ec2.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-02-18T16:35:48Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Feb 18 16:36:00.553 - 30s   E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:36:01.126 E ns/openshift-machine-api pod/machine-api-controllers-7f9d9f5597-2pfqf node/ip-10-0-141-138.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:36:10.175 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Feb 18 16:37:00.553 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:37:09.763 E ns/openshift-operator-lifecycle-manager pod/packageserver-77d6d56b8b-t5rn9 node/ip-10-0-141-138.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:37:45.851 E ns/openshift-marketplace pod/redhat-operators-bbdd79c68-qhrvf node/ip-10-0-146-83.ec2.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:37:57.966 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=scheduler container exited with code 2 (Error): PU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 16:35:39.186750       1 scheduler.go:667] pod openshift-ingress/router-default-6c4bc9fbb6-gnx7l is bound successfully on node "ip-10-0-146-83.ec2.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 16:35:39.770923       1 scheduler.go:667] pod e2e-k8s-sig-apps-job-upgrade-5468/foo-jnxwv is bound successfully on node "ip-10-0-146-83.ec2.internal", 6 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 16:35:39.829878       1 scheduler.go:667] pod e2e-k8s-service-upgrade-2037/service-test-wbrq7 is bound successfully on node "ip-10-0-146-83.ec2.internal", 6 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<4>|Memory<16419384Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804984Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 16:35:40.086873       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-k8bnq: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0218 16:35:40.129567       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-k8bnq: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Feb 18 16:37:58.057 E ns/openshift-controller-manager pod/controller-manager-4twpl node/ip-10-0-134-245.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:37:58.086 E ns/openshift-monitoring pod/node-exporter-hxlcw node/ip-10-0-134-245.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:19:46Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:46Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:37:58.114 E ns/openshift-multus pod/multus-admission-controller-5g88g node/ip-10-0-134-245.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 18 16:37:58.131 E ns/openshift-sdn pod/sdn-controller-qb44g node/ip-10-0-134-245.ec2.internal container=sdn-controller container exited with code 2 (Error): I0218 16:21:40.575025       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0218 16:34:00.357504       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"1c2a51c5-631f-41cd-8732-24a744ba842f", ResourceVersion:"32438", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717637807, loc:(*time.Location)(0x2b77ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-134-245\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-02-18T16:34:00Z\",\"renewTime\":\"2020-02-18T16:34:00Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-134-245 became leader'\nI0218 16:34:00.357625       1 leaderelection.go:251] successfully acquired lease openshift-sdn/openshift-network-controller\nI0218 16:34:00.363465       1 master.go:51] Initializing SDN master\nI0218 16:34:00.382291       1 network_controller.go:60] Started OpenShift Network Controller\n
Feb 18 16:37:58.153 E ns/openshift-sdn pod/ovs-xpx5v node/ip-10-0-134-245.ec2.internal container=openvswitch container exited with code 143 (Error): 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:35:41.198Z|00493|connmgr|INFO|br0<->unix#943: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:35:41.259Z|00494|bridge|INFO|bridge br0: deleted interface vethf9dc5ea9 on port 31\n2020-02-18T16:35:41.266Z|00495|bridge|WARN|could not open network device veth286a9770 (No such device)\n2020-02-18T16:35:41.271Z|00496|bridge|WARN|could not open network device veth225fe2ef (No such device)\n2020-02-18T16:35:41.276Z|00497|bridge|WARN|could not open network device veth286a9770 (No such device)\n2020-02-18T16:35:41.278Z|00498|bridge|WARN|could not open network device veth225fe2ef (No such device)\n2020-02-18T16:35:41.300Z|00499|bridge|WARN|could not open network device veth286a9770 (No such device)\n2020-02-18T16:35:41.308Z|00500|bridge|WARN|could not open network device veth225fe2ef (No such device)\n2020-02-18T16:35:41.322Z|00501|bridge|WARN|could not open network device veth286a9770 (No such device)\n2020-02-18T16:35:41.324Z|00502|bridge|WARN|could not open network device veth225fe2ef (No such device)\nExiting ovs-vswitchd (12287).\n2020-02-18T16:35:42.202Z|00503|bridge|INFO|bridge br0: deleted interface veth72b42774 on port 21\n2020-02-18T16:35:42.202Z|00504|bridge|INFO|bridge br0: deleted interface tun0 on port 2\n2020-02-18T16:35:42.202Z|00505|bridge|INFO|bridge br0: deleted interface vethd91fbbe1 on port 13\n2020-02-18T16:35:42.202Z|00506|bridge|INFO|bridge br0: deleted interface vethc8925f33 on port 7\n2020-02-18T16:35:42.202Z|00507|bridge|INFO|bridge br0: deleted interface vethc58a76f9 on port 17\n2020-02-18T16:35:42.202Z|00508|bridge|INFO|bridge br0: deleted interface vethdd3da96e on port 23\n2020-02-18T16:35:42.202Z|00509|bridge|INFO|bridge br0: deleted interface br0 on port 65534\n2020-02-18T16:35:42.202Z|00510|bridge|INFO|bridge br0: deleted interface vxlan0 on port 1\n2020-02-18T16:35:42.202Z|00511|bridge|INFO|bridge br0: deleted interface vethf313c010 on port 24\n2020-02-18T16:35:42.385Z|00002|daemon_unix(monitor)|INFO|pid 12287 died, exit status 0, exiting\nTerminated\n
Feb 18 16:37:58.290 E ns/openshift-multus pod/multus-bwnr6 node/ip-10-0-134-245.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:37:58.312 E ns/openshift-machine-config-operator pod/machine-config-server-fz4qv node/ip-10-0-134-245.ec2.internal container=machine-config-server container exited with code 2 (Error): I0218 16:32:24.981937       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 16:32:24.983354       1 api.go:51] Launching server on :22624\nI0218 16:32:24.983408       1 api.go:51] Launching server on :22623\n
Feb 18 16:37:58.338 E ns/openshift-cluster-node-tuning-operator pod/tuned-x282m node/ip-10-0-134-245.ec2.internal container=tuned container exited with code 143 (Error): internal) labels changed node wide: false\nI0218 16:35:27.778040   36397 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-5-ip-10-0-134-245.ec2.internal) labels changed node wide: false\nI0218 16:35:27.964925   36397 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-7-ip-10-0-134-245.ec2.internal) labels changed node wide: false\nI0218 16:35:28.172304   36397 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/installer-8-ip-10-0-134-245.ec2.internal) labels changed node wide: false\nI0218 16:35:29.357891   36397 openshift-tuned.go:550] Pod (openshift-kube-controller-manager/revision-pruner-5-ip-10-0-134-245.ec2.internal) labels changed node wide: false\nI0218 16:35:29.683826   36397 openshift-tuned.go:550] Pod (openshift-cluster-version/cluster-version-operator-664b9488c-qfplr) labels changed node wide: true\nI0218 16:35:33.529460   36397 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:33.531716   36397 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:33.712837   36397 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:35:37.199642   36397 openshift-tuned.go:550] Pod (openshift-network-operator/network-operator-6d8c69b5f-wqqbf) labels changed node wide: true\nI0218 16:35:38.528139   36397 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:38.534788   36397 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:39.117163   36397 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:35:41.729606   36397 openshift-tuned.go:550] Pod (openshift-etcd/etcd-member-ip-10-0-134-245.ec2.internal) labels changed node wide: true\nI0218 16:35:42.317003   36397 openshift-tuned.go:137] Received signal: terminated\nI0218 16:35:42.317076   36397 openshift-tuned.go:304] Sending TERM to PID 36548\n
Feb 18 16:37:58.428 E ns/openshift-machine-config-operator pod/machine-config-daemon-c5xdd node/ip-10-0-134-245.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:38:01.858 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-146-83.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T16:37:59.440Z caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T16:37:59.447Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T16:37:59.447Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T16:37:59.449Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T16:37:59.449Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T16:37:59.449Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T16:37:59.450Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T16:37:59.450Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 16:38:02.058 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): n has been compacted\nE0218 16:35:41.779424       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779473       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779500       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779438       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779475       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779537       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779715       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779780       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779870       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779915       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779966       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:41.779981       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:35:42.124298       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}\nI0218 16:35:42.264270       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-134-245.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 16:35:42.264477       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 18 16:38:02.058 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0218 16:14:13.342189       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 16:38:02.058 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:34:18.948304       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:18.948686       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:34:19.156952       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:19.157775       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 16:38:02.105 E ns/openshift-multus pod/multus-bwnr6 node/ip-10-0-134-245.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:38:02.138 E ns/openshift-monitoring pod/node-exporter-n5t8z node/ip-10-0-139-148.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:19:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:38:02.207 E ns/openshift-multus pod/multus-wth4c node/ip-10-0-139-148.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:38:02.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): tor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=servicemonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=servicemonitors", couldn't start monitor for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs": unable to monitor quota for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs"]\nI0218 16:33:50.174045       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0218 16:33:50.174059       1 policy_controller.go:147] Started Origin Controllers\nI0218 16:33:50.177471       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0218 16:33:50.177530       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0218 16:33:50.177551       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0218 16:33:50.177934       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0218 16:33:50.248698       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0218 16:33:50.312419       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0218 16:33:50.782807       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\n
Feb 18 16:38:02.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:34:27.495660       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:27.496020       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:34:37.506755       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:37.507127       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:34:47.519473       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:47.520051       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:34:57.531178       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:34:57.531635       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:35:07.541979       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:35:07.543348       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:35:17.550344       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:35:17.550857       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:35:27.589787       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:35:27.590234       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:35:37.609119       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:35:37.609740       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 18 16:38:02.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-134-245.ec2.internal node/ip-10-0-134-245.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): 16:35:39.017473       1 replica_set.go:561] Too few replicas for ReplicaSet e2e-k8s-sig-apps-deployment-upgrade-7635/dp-657fc4b57d, need 1, creating 1\nI0218 16:35:39.083319       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"e2e-k8s-sig-apps-deployment-upgrade-7635", Name:"dp-657fc4b57d", UID:"a156a008-bb8e-4b8e-b471-7630aedb33d3", APIVersion:"apps/v1", ResourceVersion:"17671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dp-657fc4b57d-58ndt\nI0218 16:35:39.092090       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-ingress", Name:"router-default-6c4bc9fbb6", UID:"b7aa7f52-2644-43e0-a057-cc5e75e593cb", APIVersion:"apps/v1", ResourceVersion:"31821", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: router-default-6c4bc9fbb6-gnx7l\nI0218 16:35:39.549178       1 replica_set.go:561] Too few replicas for ReplicationController e2e-k8s-service-upgrade-2037/service-test, need 2, creating 1\nW0218 16:35:39.666676       1 reflector.go:299] k8s.io/client-go/metadata/metadatainformer/informer.go:89: watch of *v1.PartialObjectMetadata ended with: too old resource version: 33920 (34126)\nI0218 16:35:39.695771       1 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"e2e-k8s-sig-apps-job-upgrade-5468", Name:"foo", UID:"d51d2157-d689-41a4-8ab6-5ab3c0029921", APIVersion:"batch/v1", ResourceVersion:"17329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: foo-jnxwv\nI0218 16:35:39.802978       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"e2e-k8s-service-upgrade-2037", Name:"service-test", UID:"61cc0122-a098-48b8-8ff5-552fae21f58b", APIVersion:"v1", ResourceVersion:"31590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: service-test-wbrq7\nE0218 16:35:41.863950       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request\n
Feb 18 16:38:02.248 E ns/openshift-machine-config-operator pod/machine-config-daemon-sf4h8 node/ip-10-0-139-148.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:38:02.276 E ns/openshift-cluster-node-tuning-operator pod/tuned-p5rl9 node/ip-10-0-139-148.ec2.internal container=tuned container exited with code 143 (Error): in_sysctl: reapplying system sysctl\n2020-02-18 16:33:28,954 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0218 16:33:35.347359   49758 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-pq72f) labels changed node wide: false\nI0218 16:35:39.042647   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-7635/dp-657fc4b57d-9ft76) labels changed node wide: true\nI0218 16:35:43.538405   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:43.540552   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:43.660329   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:35:43.712654   49758 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-557f76db96-v9klt) labels changed node wide: true\nI0218 16:35:48.538391   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:48.541079   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:48.684632   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:36:05.351522   49758 openshift-tuned.go:550] Pod (openshift-monitoring/telemeter-client-7dd6dcd44c-55m4m) labels changed node wide: true\nI0218 16:36:08.538372   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:36:08.540054   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:36:08.689698   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:36:10.104968   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5468/foo-xh4kc) labels changed node wide: false\nI0218 16:36:12.101536   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5468/foo-zf8sw) labels changed node wide: true\n
Feb 18 16:38:04.828 E ns/openshift-multus pod/multus-wth4c node/ip-10-0-139-148.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:38:06.888 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-146-83.ec2.internal container=alertmanager-proxy container exited with code 1 (Error): 2020/02/18 16:37:31 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:37:31 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:37:31 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:38:05 main.go:138: Invalid configuration:\n  unable to load OpenShift configuration: unable to retrieve authentication information for tokens: Timeout: request did not complete within requested timeout 34s\n
Feb 18 16:38:07.745 E ns/openshift-machine-config-operator pod/machine-config-daemon-c5xdd node/ip-10-0-134-245.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:38:11.301 E ns/openshift-machine-config-operator pod/machine-config-daemon-sf4h8 node/ip-10-0-139-148.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:38:13.930 E ns/openshift-marketplace pod/redhat-operators-8cb867f5d-rwkjb node/ip-10-0-146-83.ec2.internal container=redhat-operators container exited with code 2 (Error): 
Feb 18 16:38:18.071 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Feb 18 16:38:19.490 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=prometheus container exited with code 1 (Error): caller=web.go:496 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-02-18T16:20:14.962Z caller=main.go:657 msg="Starting TSDB ..."\nlevel=info ts=2020-02-18T16:20:14.965Z caller=head.go:535 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-02-18T16:20:14.967Z caller=head.go:583 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:672 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:673 msg="TSDB started"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:743 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:526 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:540 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:562 msg="Stopping scrape manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:536 msg="Notify discovery manager stopped"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=main.go:522 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-02-18T16:20:14.968Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-02-18T16:20:14.968Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-02-18T16:20:14.969Z caller=main.go:556 msg="Scrape manager stopped"\nlevel=info ts=2020-02-18T16:20:14.969Z caller=notifier.go:602 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-02-18T16:20:14.969Z caller=main.go:727 msg="Notifier manager stopped"\nlevel=error ts=2020-02-18
Feb 18 16:38:19.490 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/02/18 16:20:16 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Feb 18 16:38:19.490 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-02-18T16:20:16.211713769Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.9'."\nlevel=info ts=2020-02-18T16:20:16.211845682Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-02-18T16:20:16.215293228Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-02-18T16:20:21.338513163Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Feb 18 16:38:19.518 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:19.518 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:19.518 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-223.ec2.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:19.554 E ns/openshift-monitoring pod/grafana-669844cfcc-9gngc node/ip-10-0-142-223.ec2.internal container=grafana-proxy container exited with code 2 (Error): 
Feb 18 16:38:19.570 E ns/openshift-monitoring pod/prometheus-adapter-5655fc76d9-tl5cc node/ip-10-0-142-223.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0218 16:19:04.795374       1 adapter.go:93] successfully using in-cluster auth\nI0218 16:19:05.257195       1 secure_serving.go:116] Serving securely on [::]:6443\n
Feb 18 16:38:19.613 E ns/openshift-monitoring pod/thanos-querier-84f4bdf47f-dn8sv node/ip-10-0-142-223.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/02/18 16:19:10 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 16:19:10 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:19:10 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:19:10 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/02/18 16:19:10 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:19:10 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/02/18 16:19:10 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:19:10 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/02/18 16:19:10 http.go:96: HTTPS: listening on [::]:9091\n
Feb 18 16:38:19.635 E ns/openshift-ingress pod/router-default-6c4bc9fbb6-kwcmj node/ip-10-0-142-223.ec2.internal container=router container exited with code 2 (Error): lhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:02.510521       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:08.381765       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:13.365950       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:32.768086       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:37.763415       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:42.763116       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:37:47.759871       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:38:03.553048       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:38:08.550635       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\nI0218 16:38:13.568633       1 router.go:548] template "level"=0 "msg"="router reloaded"  "output"=" - Proxy protocol on, checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"\n
Feb 18 16:38:19.693 E ns/openshift-monitoring pod/kube-state-metrics-67fc776dd4-nzdrx node/ip-10-0-142-223.ec2.internal container=kube-rbac-proxy-main container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:19.693 E ns/openshift-monitoring pod/kube-state-metrics-67fc776dd4-nzdrx node/ip-10-0-142-223.ec2.internal container=kube-state-metrics container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:19.693 E ns/openshift-monitoring pod/kube-state-metrics-67fc776dd4-nzdrx node/ip-10-0-142-223.ec2.internal container=kube-rbac-proxy-self container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:20.563 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-223.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/02/18 16:20:03 Watching directory: "/etc/alertmanager/config"\n
Feb 18 16:38:20.563 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-223.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/02/18 16:20:03 provider.go:117: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:20:03 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/02/18 16:20:03 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/02/18 16:20:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/02/18 16:20:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/02/18 16:20:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/02/18 16:20:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/02/18 16:20:03 http.go:96: HTTPS: listening on [::]:9095\n2020/02/18 16:22:00 reverseproxy.go:447: http: proxy error: context canceled\n2020/02/18 16:22:04 reverseproxy.go:447: http: proxy error: context canceled\n
Feb 18 16:38:27.225 E ns/openshift-authentication-operator pod/authentication-operator-77ff86b6b7-qmnrn node/ip-10-0-157-11.ec2.internal container=operator container exited with code 255 (Error): pgradeable"}]}}\nI0218 16:37:45.304696       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"a8e51245-7201-4843-9d9f-c4ea299caa1f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: Post https://172.30.0.1:443/apis/oauth.openshift.io/v1/oauthclients: stream error: stream ID 1835; INTERNAL_ERROR" to "OperatorSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (post oauthclients.oauth.openshift.io)"\nI0218 16:37:46.937963       1 status_controller.go:166] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-18T16:04:37Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T16:37:46Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-18T16:06:40Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T15:57:42Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0218 16:37:46.947007       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"a8e51245-7201-4843-9d9f-c4ea299caa1f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (post oauthclients.oauth.openshift.io)" to "",Progressing changed from True to False ("")\nI0218 16:38:24.516088       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:38:24.516220       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:38:27.248 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-5dffzmgbf node/ip-10-0-157-11.ec2.internal container=operator container exited with code 255 (Error): ithub.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:35:39.506525       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.ClusterOperator total 2 items received\nW0218 16:35:40.102619       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 34107 (34155)\nI0218 16:35:41.102965       1 reflector.go:158] Listing and watching *v1.ClusterOperator from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:35:42.713117       1 reflector.go:383] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Watch close - *v1.Proxy total 0 items received\nW0218 16:35:43.596188       1 reflector.go:299] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Proxy ended with: too old resource version: 23059 (34229)\nI0218 16:35:44.596557       1 reflector.go:158] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0218 16:36:04.480662       1 httplog.go:90] GET /metrics: (7.137382ms) 200 [Prometheus/2.14.0 10.129.2.20:41842]\nI0218 16:36:34.479221       1 httplog.go:90] GET /metrics: (5.742963ms) 200 [Prometheus/2.14.0 10.129.2.20:41842]\nI0218 16:37:04.499916       1 httplog.go:90] GET /metrics: (26.296715ms) 200 [Prometheus/2.14.0 10.129.2.20:41842]\nI0218 16:37:34.479518       1 httplog.go:90] GET /metrics: (5.983209ms) 200 [Prometheus/2.14.0 10.129.2.20:41842]\nI0218 16:38:04.505371       1 httplog.go:90] GET /metrics: (31.39497ms) 200 [Prometheus/2.14.0 10.129.2.20:41842]\nI0218 16:38:15.103776       1 httplog.go:90] GET /metrics: (5.312824ms) 200 [Prometheus/2.14.0 10.131.0.23:34022]\nI0218 16:38:24.903002       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:38:24.903228       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:38:29.502 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7556cb9cb4-ct89q node/ip-10-0-157-11.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): sDegraded: nodes/ip-10-0-134-245.ec2.internal pods/openshift-kube-scheduler-ip-10-0-134-245.ec2.internal container=\"scheduler\" is not ready"\nI0218 16:38:17.510563       1 status_controller.go:165] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-02-18T16:01:41Z","message":"NodeControllerDegraded: All master node(s) are ready\nStaticPodsDegraded: nodes/ip-10-0-134-245.ec2.internal pods/openshift-kube-scheduler-ip-10-0-134-245.ec2.internal container=\"scheduler\" is not ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-02-18T16:18:05Z","message":"Progressing: 3 nodes are at revision 6","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-02-18T15:56:04Z","message":"Available: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-02-18T15:53:45Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0218 16:38:17.552545       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"9506fb25-d979-4cb8-99f5-051d7f74415d", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master node(s) \"ip-10-0-134-245.ec2.internal\" not ready\nStaticPodsDegraded: nodes/ip-10-0-134-245.ec2.internal pods/openshift-kube-scheduler-ip-10-0-134-245.ec2.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master node(s) are ready\nStaticPodsDegraded: nodes/ip-10-0-134-245.ec2.internal pods/openshift-kube-scheduler-ip-10-0-134-245.ec2.internal container=\"scheduler\" is not ready"\nI0218 16:38:28.305460       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0218 16:38:28.305603       1 leaderelection.go:66] leaderelection lost\n
Feb 18 16:38:32.870 E ns/openshift-operator-lifecycle-manager pod/packageserver-79fd967559-dct7l node/ip-10-0-157-11.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:38:46.648 E kube-apiserver failed contacting the API: Get https://api.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=37114&timeout=9m24s&timeoutSeconds=564&watch=true: dial tcp 35.171.107.199:6443: connect: connection refused
Feb 18 16:39:00.063 E ns/openshift-cluster-node-tuning-operator pod/cluster-node-tuning-operator-7b94648485-6nlml node/ip-10-0-134-245.ec2.internal container=cluster-node-tuning-operator container exited with code 255 (Error): I0218 16:38:59.295449       1 main.go:27] Go Version: go1.12.9\nI0218 16:38:59.295893       1 main.go:28] Go OS/Arch: linux/amd64\nI0218 16:38:59.295945       1 main.go:29] node-tuning Version: 769ba5c-dirty\nI0218 16:38:59.295988       1 main.go:45] Operator namespace: openshift-cluster-node-tuning-operator\nI0218 16:38:59.296586       1 leader.go:46] Trying to become the leader.\nF0218 16:38:59.300760       1 main.go:58] Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Feb 18 16:39:15.553 E openshift-apiserver OpenShift API is not responding to GET requests
Feb 18 16:39:23.237 E ns/openshift-cluster-node-tuning-operator pod/tuned-49wfl node/ip-10-0-146-83.ec2.internal container=tuned container exited with code 143 (Error): lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:17.892773    2821 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:18.019097    2821 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:20.779026    2821 openshift-tuned.go:550] Pod (openshift-marketplace/certified-operators-5f876f7cc4-vg4v4) labels changed node wide: true\nI0218 16:38:22.885869    2821 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:22.889847    2821 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:23.074958    2821 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:25.040786    2821 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-557f76db96-z9fxv) labels changed node wide: true\nI0218 16:38:27.885837    2821 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:27.892354    2821 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:28.025330    2821 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:30.780577    2821 openshift-tuned.go:550] Pod (openshift-monitoring/thanos-querier-84f4bdf47f-p544p) labels changed node wide: true\nI0218 16:38:32.885846    2821 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:32.887664    2821 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:33.004857    2821 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:46.582219    2821 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:38:46.582758    2821 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:38:46.582774    2821 openshift-tuned.go:883] Increasing resyncPeriod to 232\n
Feb 18 16:39:24.616 E ns/openshift-cluster-node-tuning-operator pod/tuned-lt42q node/ip-10-0-141-138.ec2.internal container=tuned container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:39:29.670 E ns/openshift-cluster-node-tuning-operator pod/tuned-p5rl9 node/ip-10-0-139-148.ec2.internal container=tuned container exited with code 143 (Error): in_sysctl: reapplying system sysctl\n2020-02-18 16:33:28,954 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0218 16:33:35.347359   49758 openshift-tuned.go:550] Pod (openshift-cluster-node-tuning-operator/tuned-pq72f) labels changed node wide: false\nI0218 16:35:39.042647   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-deployment-upgrade-7635/dp-657fc4b57d-9ft76) labels changed node wide: true\nI0218 16:35:43.538405   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:43.540552   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:43.660329   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:35:43.712654   49758 openshift-tuned.go:550] Pod (openshift-marketplace/community-operators-557f76db96-v9klt) labels changed node wide: true\nI0218 16:35:48.538391   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:35:48.541079   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:35:48.684632   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:36:05.351522   49758 openshift-tuned.go:550] Pod (openshift-monitoring/telemeter-client-7dd6dcd44c-55m4m) labels changed node wide: true\nI0218 16:36:08.538372   49758 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:36:08.540054   49758 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:36:08.689698   49758 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:36:10.104968   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5468/foo-xh4kc) labels changed node wide: false\nI0218 16:36:12.101536   49758 openshift-tuned.go:550] Pod (e2e-k8s-sig-apps-job-upgrade-5468/foo-zf8sw) labels changed node wide: true\n
Feb 18 16:39:32.779 E ns/openshift-operator-lifecycle-manager pod/packageserver-79fd967559-rcxnc node/ip-10-0-134-245.ec2.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Feb 18 16:39:43.264 E clusteroperator/monitoring changed Degraded to True: UpdatingconfigurationsharingFailed: Failed to rollout the stack. Error: running task Updating configuration sharing failed: failed to retrieve Grafana host: getting Route object failed: the server is currently unable to handle the request (get routes.route.openshift.io grafana)
Feb 18 16:40:51.481 E ns/openshift-monitoring pod/node-exporter-6l5hd node/ip-10-0-142-223.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:19:40Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:19:40Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:40:51.508 E ns/openshift-multus pod/multus-4pddb node/ip-10-0-142-223.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:40:51.549 E ns/openshift-machine-config-operator pod/machine-config-daemon-lrcpm node/ip-10-0-142-223.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:40:51.563 E ns/openshift-cluster-node-tuning-operator pod/tuned-6mcp4 node/ip-10-0-142-223.ec2.internal container=tuned container exited with code 143 (Error): :31.628052   46900 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:37:31.743706   46900 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:20.654209   46900 openshift-tuned.go:550] Pod (openshift-ingress/router-default-6c4bc9fbb6-kwcmj) labels changed node wide: true\nI0218 16:38:21.624974   46900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:21.626764   46900 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:21.746305   46900 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:22.652915   46900 openshift-tuned.go:550] Pod (openshift-monitoring/prometheus-k8s-0) labels changed node wide: true\nI0218 16:38:26.625014   46900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:26.627522   46900 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:26.745220   46900 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:32.600037   46900 openshift-tuned.go:550] Pod (openshift-monitoring/alertmanager-main-1) labels changed node wide: true\nI0218 16:38:36.624979   46900 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:36.626409   46900 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:36.743727   46900 openshift-tuned.go:638] Active and recommended profile (openshift-node) match.  Label changes will not trigger profile reload.\nI0218 16:38:46.583653   46900 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0218 16:38:46.588300   46900 openshift-tuned.go:881] Pod event watch channel closed.\nI0218 16:38:46.588321   46900 openshift-tuned.go:883] Increasing resyncPeriod to 208\nI0218 16:38:53.267493   46900 openshift-tuned.go:137] Received signal: terminated\n
Feb 18 16:40:54.229 E ns/openshift-multus pod/multus-4pddb node/ip-10-0-142-223.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:41:00.250 E ns/openshift-machine-config-operator pod/machine-config-daemon-lrcpm node/ip-10-0-142-223.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:41:04.638 E ns/openshift-controller-manager pod/controller-manager-v9888 node/ip-10-0-157-11.ec2.internal container=controller-manager container exited with code 1 (Error): 
Feb 18 16:41:04.662 E ns/openshift-monitoring pod/node-exporter-k8psv node/ip-10-0-157-11.ec2.internal container=node-exporter container exited with code 143 (Error): 2-18T16:20:29Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - hwmon" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-02-18T16:20:29Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Feb 18 16:41:04.705 E ns/openshift-sdn pod/sdn-controller-l5g8s node/ip-10-0-157-11.ec2.internal container=sdn-controller container exited with code 2 (Error): I0218 16:21:25.436510       1 leaderelection.go:241] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0218 16:32:43.709075       1 leaderelection.go:330] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-v4nyst74-b230b.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.155.57:6443: connect: connection refused\n
Feb 18 16:41:04.722 E ns/openshift-multus pod/multus-admission-controller-w8hkv node/ip-10-0-157-11.ec2.internal container=multus-admission-controller container exited with code 255 (Error): 
Feb 18 16:41:04.744 E ns/openshift-sdn pod/ovs-blpjf node/ip-10-0-157-11.ec2.internal container=openvswitch container exited with code 143 (Error): .095Z|00258|bridge|INFO|bridge br0: deleted interface veth41e75230 on port 22\n2020-02-18T16:38:30.294Z|00259|connmgr|INFO|br0<->unix#997: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:30.339Z|00260|connmgr|INFO|br0<->unix#1000: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:30.363Z|00261|bridge|INFO|bridge br0: deleted interface veth33a5af4e on port 25\n2020-02-18T16:38:30.621Z|00262|connmgr|INFO|br0<->unix#1003: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:30.689Z|00263|connmgr|INFO|br0<->unix#1007: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:30.725Z|00027|jsonrpc|WARN|unix#883: send error: Broken pipe\n2020-02-18T16:38:30.725Z|00028|reconnect|WARN|unix#883: connection dropped (Broken pipe)\n2020-02-18T16:38:30.763Z|00264|bridge|INFO|bridge br0: deleted interface veth47014479 on port 14\n2020-02-18T16:38:31.179Z|00265|connmgr|INFO|br0<->unix#1010: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:31.223Z|00266|connmgr|INFO|br0<->unix#1013: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:31.263Z|00267|bridge|INFO|bridge br0: deleted interface veth4b2fcdca on port 29\n2020-02-18T16:38:31.461Z|00268|connmgr|INFO|br0<->unix#1016: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:31.506Z|00269|connmgr|INFO|br0<->unix#1019: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:31.565Z|00270|bridge|INFO|bridge br0: deleted interface veth25cc3fdd on port 9\n2020-02-18T16:38:31.604Z|00271|connmgr|INFO|br0<->unix#1022: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:31.652Z|00272|connmgr|INFO|br0<->unix#1025: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:31.678Z|00273|bridge|INFO|bridge br0: deleted interface vethc48336e6 on port 30\n2020-02-18T16:38:31.732Z|00274|connmgr|INFO|br0<->unix#1028: 2 flow_mods in the last 0 s (2 deletes)\n2020-02-18T16:38:31.791Z|00275|connmgr|INFO|br0<->unix#1031: 4 flow_mods in the last 0 s (4 deletes)\n2020-02-18T16:38:31.834Z|00276|bridge|INFO|bridge br0: deleted interface veth332b3879 on port 32\nTerminated\n
Feb 18 16:41:04.759 E ns/openshift-multus pod/multus-nsstf node/ip-10-0-157-11.ec2.internal container=kube-multus container exited with code 143 (Error): 
Feb 18 16:41:04.792 E ns/openshift-machine-config-operator pod/machine-config-daemon-8vdtg node/ip-10-0-157-11.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Feb 18 16:41:04.847 E ns/openshift-machine-config-operator pod/machine-config-server-mfqlw node/ip-10-0-157-11.ec2.internal container=machine-config-server container exited with code 2 (Error): I0218 16:32:21.649963       1 start.go:38] Version: machine-config-daemon-4.3.0-201910280117-148-g5c8eedda-dirty (5c8eeddacb4c95bbd7f95f89821208d9a1f82a2f)\nI0218 16:32:21.651478       1 api.go:51] Launching server on :22624\nI0218 16:32:21.651532       1 api.go:51] Launching server on :22623\n
Feb 18 16:41:04.865 E ns/openshift-cluster-node-tuning-operator pod/tuned-9jj5v node/ip-10-0-157-11.ec2.internal container=tuned container exited with code 143 (Error): h.  Label changes will not trigger profile reload.\nI0218 16:38:34.472872     456 openshift-tuned.go:550] Pod (openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator-5dffzmgbf) labels changed node wide: true\nI0218 16:38:34.624974     456 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:34.626888     456 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:34.831270     456 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:38:34.832569     456 openshift-tuned.go:550] Pod (openshift-authentication/oauth-openshift-74f59686f8-dg6kf) labels changed node wide: true\nI0218 16:38:39.624983     456 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:39.627216     456 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:39.753920     456 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:38:44.460911     456 openshift-tuned.go:550] Pod (openshift-machine-config-operator/machine-config-controller-69fd997486-27962) labels changed node wide: true\nI0218 16:38:44.624988     456 openshift-tuned.go:408] Dumping labels to /var/lib/tuned/ocp-pod-labels.cfg\nI0218 16:38:44.626620     456 openshift-tuned.go:441] Getting recommended profile...\nI0218 16:38:44.765572     456 openshift-tuned.go:638] Active and recommended profile (openshift-control-plane) match.  Label changes will not trigger profile reload.\nI0218 16:38:44.766289     456 openshift-tuned.go:550] Pod (openshift-operator-lifecycle-manager/packageserver-79fd967559-dct7l) labels changed node wide: true\nI0218 16:38:46.215930     456 openshift-tuned.go:137] Received signal: terminated\nI0218 16:38:46.216060     456 openshift-tuned.go:304] Sending TERM to PID 685\n2020-02-18 16:38:46,216 INFO     tuned.daemon.controller: terminating controller\n
Feb 18 16:41:04.965 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-apiserver-7 container exited with code 1 (Error): ted\nE0218 16:38:45.993592       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993622       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993635       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993599       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993783       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993833       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993909       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.993911       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0218 16:38:45.995903       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0218 16:38:46.087765       1 controller.go:107] OpenAPI AggregationController: Processing item v1.oauth.openshift.io\nI0218 16:38:46.089927       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io\nE0218 16:38:46.102766       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist\nI0218 16:38:46.102802       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.\nI0218 16:38:46.212498       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-157-11.ec2.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0218 16:38:46.212643       1 controller.go:182] Shutting down kubernetes service endpoint reconciler\n
Feb 18 16:41:04.965 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-apiserver-insecure-readyz-7 container exited with code 2 (Error): I0218 16:18:14.060183       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Feb 18 16:41:04.965 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-apiserver-cert-syncer-7 container exited with code 2 (Error): network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:38:18.266390       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:18.285375       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0218 16:38:18.502327       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:18.502701       1 certsync_controller.go:179] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {localhost-recovery-serving-certkey false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Feb 18 16:41:05.056 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=cluster-policy-controller-8 container exited with code 1 (Error): urce "operators.coreos.com/v2, Resource=catalogsourceconfigs", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses"]\nI0218 16:37:00.965503       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0218 16:37:00.965514       1 policy_controller.go:147] Started Origin Controllers\nI0218 16:37:00.965537       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0218 16:37:00.965925       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0218 16:37:00.965989       1 controller_utils.go:1027] Waiting for caches to sync for cluster resource quota controller\nI0218 16:37:00.966329       1 resource_quota_monitor.go:301] QuotaMonitor running\nI0218 16:37:01.070189       1 controller_utils.go:1034] Caches are synced for resource quota controller\nI0218 16:37:01.130254       1 controller_utils.go:1034] Caches are synced for namespace-security-allocation-controller controller\nI0218 16:38:00.968465       1 trace.go:81] Trace[1058472562]: "Reflector github.com/openshift/client-go/route/informers/externalversions/factory.go:101 ListAndWatch" (started: 2020-02-18 16:37:00.965794324 +0000 UTC m=+1251.145472237) (total time: 1m0.002611304s):\nTrace[1058472562]: [1m0.002611304s] [1m0.002611304s] END\nE0218 16:38:00.968493       1 reflector.go:126] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: Failed to list *v1.Route: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io)\nE0218 16:38:01.132037       1 namespace_scc_allocation_controller.go:214] the server was unable to return a response in the time allotted, but may still be processing the request (get rangeallocations.security.openshift.io scc-uid)\nI0218 16:38:02.066353       1 controller_utils.go:1034] Caches are synced for cluster resource quota controller\n
Feb 18 16:41:05.056 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-controller-manager-cert-syncer-8 container exited with code 2 (Error):     1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:13.513616       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:13.514089       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:19.456402       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:19.456739       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:19.457166       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:19.457367       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:19.490876       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:19.491214       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:19.491688       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:19.491883       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:23.530760       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:23.535182       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:33.549281       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:33.549843       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\nI0218 16:38:43.565648       1 certsync_controller.go:82] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0218 16:38:43.566115       1 certsync_controller.go:179] Syncing secrets: [{csr-signer false}]\n
Feb 18 16:41:05.056 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=kube-controller-manager-8 container exited with code 2 (Error): ic-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-02-18 15:34:50 +0000 UTC to 2020-02-19 15:34:50 +0000 UTC (now=2020-02-18 16:18:34.523113889 +0000 UTC))\nI0218 16:18:34.523540       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1582041222" (2020-02-18 15:53:54 +0000 UTC to 2022-02-17 15:53:55 +0000 UTC (now=2020-02-18 16:18:34.523516726 +0000 UTC))\nI0218 16:18:34.523917       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1582042714" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582042713" (2020-02-18 15:18:32 +0000 UTC to 2021-02-17 15:18:32 +0000 UTC (now=2020-02-18 16:18:34.523890645 +0000 UTC))\nI0218 16:18:34.524109       1 named_certificates.go:74] snimap["apiserver-loopback-client"]: "apiserver-loopback-client@1582042714" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1582042713" (2020-02-18 15:18:32 +0000 UTC to 2021-02-17 15:18:32 +0000 UTC (now=2020-02-18 16:18:34.524088292 +0000 UTC))\nI0218 16:18:34.524168       1 secure_serving.go:178] Serving securely on [::]:10257\nI0218 16:18:34.524226       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0218 16:18:34.525005       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\nI0218 16:18:34.525130       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Feb 18 16:41:05.459 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-157-11.ec2.internal node/ip-10-0-157-11.ec2.internal container=scheduler container exited with code 2 (Error): und node resource: "Capacity: CPU<4>|Memory<16419376Ki>|Pods<250>|StorageEphemeral<125277164Ki>; Allocatable: CPU<3500m>|Memory<15804976Ki>|Pods<250>|StorageEphemeral<115455434152>.".\nI0218 16:38:34.896140       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-tw54g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nE0218 16:38:34.913133       1 factory.go:585] pod is already present in the activeQ\nI0218 16:38:34.924080       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-tw54g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0218 16:38:35.946862       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-tw54g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0218 16:38:38.947705       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-tw54g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\nI0218 16:38:44.461378       1 factory.go:545] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-666c84877d-tw54g: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules, 2 node(s) were unschedulable.; waiting\n
Feb 18 16:41:08.593 E ns/openshift-multus pod/multus-nsstf node/ip-10-0-157-11.ec2.internal invariant violation: pod may not transition Running->Pending
Feb 18 16:41:15.380 E ns/openshift-machine-config-operator pod/machine-config-daemon-8vdtg node/ip-10-0-157-11.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Feb 18 16:41:19.202 E clusteroperator/kube-controller-manager changed Degraded to True: NodeControllerDegradedMasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-157-11.ec2.internal" not ready since 2020-02-18 16:41:04 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)