Result: SUCCESS
Tests: 1 failed / 26 succeeded
Started: 2020-05-15 18:23
Elapsed: 1h17m
Work namespace: ci-op-g6f6ltbv
Refs: openshift-4.5:21a72caa, 48:024ac3bd
Pod: 1a376efa-96d9-11ea-a959-0a580a81044f
Repo: openshift/etcd
Revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (37m22s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
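The --ginkgo.focus value above is a regular expression built from the test's display name: hyphens are escaped as \-, spaces are matched with \s, and a trailing $ anchors the match. As a hedged illustration only, re-running a different test follows the same escaping pattern (the test name below is hypothetical; substitute the real display name):

# "Some other test name" is a hypothetical placeholder, not a real test in this suite.
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sSome\sother\stest\sname$'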
121 error-level events were detected during this test run:

May 15 18:54:39.523 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 18:54:38.859214       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 18:54:38.859228       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 18:54:38.859237       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 18:54:38.859246       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 18:54:38.859254       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 18:54:38.859279       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 18:54:38.859299       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 18:54:38.859310       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 18:54:38.859320       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 18:54:38.859330       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 18:54:38.859337       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 18:54:38.859343       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 18:54:38.859349       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 18:54:38.859354       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 18:54:38.859378       1 server.go:681] external host was not specified, using 10.0.132.95\nI0515 18:54:38.859567       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 18:54:38.859792       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 18:55:11.671 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 18:55:10.922826       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 18:55:10.922834       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 18:55:10.922839       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 18:55:10.922845       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 18:55:10.922850       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 18:55:10.922857       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 18:55:10.922871       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 18:55:10.922880       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 18:55:10.922890       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 18:55:10.922899       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 18:55:10.922906       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 18:55:10.922913       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 18:55:10.922922       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 18:55:10.922931       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 18:55:10.922965       1 server.go:681] external host was not specified, using 10.0.132.95\nI0515 18:55:10.923181       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 18:55:10.923786       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 18:55:30.762 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): ent-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/ingress.operator.openshift.io/v1/dnsrecords?allowWatchBookmarks=true&resourceVersion=21439&timeout=8m1s&timeoutSeconds=481&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0515 18:55:30.116568       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0515 18:55:30.116671       1 controllermanager.go:291] leaderelection lost\nI0515 18:55:30.131031       1 deployment_controller.go:165] Shutting down deployment controller\nI0515 18:55:30.131046       1 gc_controller.go:100] Shutting down GC controller\nI0515 18:55:30.192702       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h9m59.050008982s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0515 18:55:30.131054       1 daemon_controller.go:282] Shutting down daemon sets controller\nI0515 18:55:30.131062       1 stateful_set.go:158] Shutting down statefulset controller\nI0515 18:55:30.192744       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h9m59.050008982s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0515 18:55:30.131075       1 certificate_controller.go:131] Shutting down certificate controller "csrapproving"\nI0515 18:55:30.131088       1 pvc_protection_controller.go:113] Shutting down PVC protection controller\nI0515 18:55:30.192802       1 reflector.go:181] Stopping reflector *v1.Endpoints (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0515 18:55:30.131096       1 certificate_controller.go:131] Shutting down certificate controller "csrsigning"\nI0515 18:55:30.131097       1 replica_set.go:193] Shutting down replicaset controller\nI0515 18:55:30.192845       1 reflector.go:181] Stopping reflector *v1.PartialObjectMetadata (21h9m59.050008982s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0515 18:55:30.131107       1 horizontal.go:180] Shutting down HPA controller\n
May 15 18:56:00.858 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error):   1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=17801&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 18:56:00.069439       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=20719&timeout=9m31s&timeoutSeconds=571&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 18:56:00.071251       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=23109&timeout=9m27s&timeoutSeconds=567&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 18:56:00.073245       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=22787&timeout=7m49s&timeoutSeconds=469&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0515 18:56:00.128158       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0515 18:56:00.128355       1 policy_controller.go:94] leaderelection lost\nI0515 18:56:00.128222       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-132-95 stopped leading\nI0515 18:56:00.133194       1 reconciliation_controller.go:154] Shutting down ClusterQuotaReconcilationController\nI0515 18:56:00.133228       1 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller\nI0515 18:56:00.133261       1 resource_quota_controller.go:291] Shutting down resource quota controller\n
May 15 18:58:42.992 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-62.us-east-2.compute.internal node/ip-10-0-150-62.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0515 18:58:41.819590       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0515 18:58:41.822508       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0515 18:58:41.823274       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 15 18:58:50.141 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-62.us-east-2.compute.internal node/ip-10-0-150-62.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 18:58:48.409444       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 18:58:48.409457       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 18:58:48.409475       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 18:58:48.409484       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 18:58:48.409492       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 18:58:48.409501       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 18:58:48.409517       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 18:58:48.409527       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 18:58:48.409536       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 18:58:48.409546       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 18:58:48.409555       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 18:58:48.409565       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 18:58:48.409576       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 18:58:48.409586       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 18:58:48.409618       1 server.go:681] external host was not specified, using 10.0.150.62\nI0515 18:58:48.409848       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 18:58:48.410144       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 18:59:17.504 E kube-apiserver Kube API started failing: Get https://api.ci-op-g6f6ltbv-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
May 15 18:59:18.999 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-62.us-east-2.compute.internal node/ip-10-0-150-62.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 18:59:13.733200       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 18:59:13.733214       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 18:59:13.733243       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 18:59:13.733265       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 18:59:13.733275       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 18:59:13.733284       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 18:59:13.733301       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 18:59:13.733312       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 18:59:13.733325       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 18:59:13.733336       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 18:59:13.733347       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 18:59:13.733357       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 18:59:13.733368       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 18:59:13.733379       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 18:59:13.733413       1 server.go:681] external host was not specified, using 10.0.150.62\nI0515 18:59:13.733577       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 18:59:13.733874       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 18:59:36.412 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-62.us-east-2.compute.internal node/ip-10-0-150-62.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 18:59:35.655774       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 18:59:35.655783       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 18:59:35.655808       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 18:59:35.655826       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 18:59:35.655832       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 18:59:35.655838       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 18:59:35.655846       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 18:59:35.655852       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 18:59:35.655858       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 18:59:35.655865       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 18:59:35.655871       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 18:59:35.655877       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 18:59:35.655883       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 18:59:35.655889       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 18:59:35.655912       1 server.go:681] external host was not specified, using 10.0.150.62\nI0515 18:59:35.656056       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 18:59:35.656318       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 18:59:46.976 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0515 18:59:46.141792       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0515 18:59:46.152525       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0515 18:59:46.152536       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0515 18:59:46.153265       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 15 19:00:01.091 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0515 19:00:00.859964       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0515 19:00:00.862055       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0515 19:00:00.862115       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0515 19:00:00.863074       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 15 19:00:51.842 E ns/openshift-cluster-machine-approver pod/machine-approver-f5fff97fb-2cn48 node/ip-10-0-150-62.us-east-2.compute.internal container/machine-approver-controller container exited with code 2 (Error): e%3Dmachine-approver&resourceVersion=21708&timeoutSeconds=495&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0515 19:00:21.454116       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=21519&timeoutSeconds=457&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0515 19:00:22.453446       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=21708&timeoutSeconds=496&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0515 19:00:22.455196       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=21519&timeoutSeconds=514&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0515 19:00:23.454307       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=21708&timeoutSeconds=490&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0515 19:00:23.455786       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=21519&timeoutSeconds=303&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
May 15 19:00:52.858 E ns/openshift-kube-storage-version-migrator pod/migrator-c47697f94-8pb5k node/ip-10-0-158-250.us-east-2.compute.internal container/migrator container exited with code 2 (Error): I0515 18:55:20.033953       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
May 15 19:01:09.068 E ns/openshift-monitoring pod/node-exporter-gv28c node/ip-10-0-150-62.us-east-2.compute.internal container/node-exporter container exited with code 143 (Error): porter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-15T18:44:00Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\ntime="2020-05-15T18:59:27Z" level=error msg="ERROR: netclass collector failed after 0.059262s: could not get net class info: error obtaining net class info: could not access file phys_port_id: no such device" source="collector.go:132"\n
May 15 19:01:20.042 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:01:17.584039       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:01:17.584077       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:01:17.584113       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:01:17.584267       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:01:17.584326       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:01:17.584370       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:01:17.584436       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:01:17.584482       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:01:17.584525       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:01:17.584568       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:01:17.584607       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:01:17.584640       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:01:17.584678       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:01:17.584716       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:01:17.584791       1 server.go:681] external host was not specified, using 10.0.132.95\nI0515 19:01:17.585106       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:01:17.585442       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 19:01:20.572 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0515 19:01:19.695684       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0515 19:01:19.723568       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0515 19:01:19.724445       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 15 19:01:33.986 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-158-250.us-east-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/15 18:49:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/05/15 18:54:19 config map updated\n2020/05/15 18:54:19 successfully triggered reload\n
May 15 19:01:33.986 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-158-250.us-east-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/05/15 18:49:43 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/15 18:49:43 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 18:49:43 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 18:49:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/15 18:49:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 18:49:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/15 18:49:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/15 18:49:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0515 18:49:43.524458       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 18:49:43 http.go:107: HTTPS: listening on [::]:9091\n2020/05/15 18:50:24 oauthproxy.go:774: basicauth: 10.131.0.14:38726 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 18:54:55 oauthproxy.go:774: basicauth: 10.131.0.14:43868 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 18:59:25 oauthproxy.go:774: basicauth: 10.131.0.14:49220 Authorization header does not start with 'Basic', skipping basic authentication\n
May 15 19:01:33.986 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-158-250.us-east-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-15T18:49:42.719621287Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-15T18:49:42.721151667Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-15T18:49:47.870857582Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-15T18:49:47.870940566Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-05-15T18:49:48.057419935Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-15T18:52:47.982093187Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-15T18:55:48.008159108Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
May 15 19:01:34.540 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-178.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/15 18:49:32 Watching directory: "/etc/alertmanager/config"\n
May 15 19:01:34.540 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-178.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/15 18:49:32 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 18:49:32 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 18:49:32 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 18:49:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/15 18:49:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 18:49:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 18:49:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0515 18:49:33.040641       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 18:49:33 http.go:107: HTTPS: listening on [::]:9095\n
May 15 19:01:37.025 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0515 19:01:36.299247       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0515 19:01:36.303055       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0515 19:01:36.303159       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0515 19:01:36.307586       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 15 19:01:38.130 E ns/openshift-monitoring pod/openshift-state-metrics-7578b89585-qbkl2 node/ip-10-0-158-250.us-east-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
May 15 19:01:49.362 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:01:49.087139       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:01:49.087214       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:01:49.087263       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:01:49.087318       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:01:49.087361       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:01:49.087408       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:01:49.087456       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:01:49.087501       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:01:49.087545       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:01:49.087588       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:01:49.087627       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:01:49.087666       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:01:49.087710       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:01:49.087756       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:01:49.087824       1 server.go:681] external host was not specified, using 10.0.132.95\nI0515 19:01:49.088069       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:01:49.088399       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 19:02:16.452 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-158-250.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:01:48.206Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:01:48.211Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:01:48.211Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:01:48.212Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:01:48.212Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:01:48.213Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:01:48.213Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:01:48.213Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:01:48.213Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:01:48.214Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:02:19.638 E ns/openshift-monitoring pod/node-exporter-4ctfw node/ip-10-0-128-178.us-east-2.compute.internal container/node-exporter container exited with code 143 (Error): -15T18:48:06Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-15T18:48:06Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 15 19:02:21.698 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:02:20.853257       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:02:20.853265       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:02:20.853270       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:02:20.853276       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:02:20.853282       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:02:20.853290       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:02:20.853298       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:02:20.853304       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:02:20.853311       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:02:20.853317       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:02:20.853323       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:02:20.853328       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:02:20.853334       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:02:20.853340       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:02:20.853363       1 server.go:681] external host was not specified, using 10.0.132.95\nI0515 19:02:20.853512       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:02:20.853734       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 19:02:33.817 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): raints?allowWatchBookmarks=true&resourceVersion=27427&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:02:32.397163       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/networks?allowWatchBookmarks=true&resourceVersion=29079&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0515 19:02:32.662956       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0515 19:02:32.663073       1 controllermanager.go:291] leaderelection lost\nI0515 19:02:32.675560       1 certificate_controller.go:131] Shutting down certificate controller "csrapproving"\nI0515 19:02:32.675585       1 clusterroleaggregation_controller.go:161] Shutting down ClusterRoleAggregator\nI0515 19:02:32.675608       1 expand_controller.go:331] Shutting down expand controller\nI0515 19:02:32.675698       1 namespace_controller.go:212] Shutting down namespace controller\nI0515 19:02:32.675713       1 controller.go:222] Shutting down service controller\nI0515 19:02:32.675836       1 cleaner.go:90] Shutting down CSR cleaner controller\nI0515 19:02:32.675847       1 cronjob_controller.go:101] Shutting down CronJob Manager\nI0515 19:02:32.675863       1 dynamic_serving_content.go:145] Shutting down csr-controller::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.crt::/etc/kubernetes/static-pod-certs/secrets/csr-signer/tls.key\nI0515 19:02:32.675868       1 horizontal.go:180] Shutting down HPA controller\nI0515 19:02:32.675874       1 tokens_controller.go:189] Shutting down\nI0515 19:02:32.741968       1 horizontal.go:215] horizontal pod autoscaler controller worker shutting down\nI0515 19:02:32.675880       1 daemon_controller.go:282] Shutting down daemon sets controller\nI0515 19:02:32.675882       1 pvc_protection_controller.go:113] Shutting down PVC protection controller\n
May 15 19:02:40.502 E ns/openshift-marketplace pod/redhat-marketplace-58c7b65977-bhphz node/ip-10-0-158-250.us-east-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
May 15 19:02:42.500 E ns/openshift-marketplace pod/redhat-operators-f959b99c5-zlxp9 node/ip-10-0-158-250.us-east-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
May 15 19:02:43.584 E ns/openshift-marketplace pod/certified-operators-599bd66bf6-2d7c7 node/ip-10-0-158-250.us-east-2.compute.internal container/certified-operators container exited with code 2 (Error): 
May 15 19:02:44.519 E ns/openshift-marketplace pod/community-operators-6d9b4b5885-8vlg7 node/ip-10-0-158-250.us-east-2.compute.internal container/community-operators container exited with code 2 (Error): 
May 15 19:02:52.749 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:02:48.544Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:02:48.549Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:02:48.549Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:02:48.551Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:02:48.551Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:02:58.922 E ns/openshift-monitoring pod/node-exporter-lf9nk node/ip-10-0-137-78.us-east-2.compute.internal container/node-exporter container exited with code 143 (Error): -15T18:43:57Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-15T18:43:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 15 19:03:21.051 E ns/openshift-console pod/console-768d55745c-fjc8f node/ip-10-0-137-78.us-east-2.compute.internal container/console container exited with code 2 (Error): 15T18:49:16Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:26Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:36Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:46Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:56Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:06Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:16Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:26Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:36Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:46Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:56Z cmd/main: Binding to [::]:8443...\n2020-05-15T18:50:56Z cmd/main: using TLS\n
May 15 19:03:22.100 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-132-95.us-east-2.compute.internal node/ip-10-0-132-95.us-east-2.compute.internal container/kube-scheduler container exited with code 255 (Error): t list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope\nE0515 19:03:19.046912       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope\nE0515 19:03:19.047051       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope\nE0515 19:03:19.047826       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"\nE0515 19:03:19.047951       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"\nE0515 19:03:19.048030       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope\nE0515 19:03:19.048662       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope\nE0515 19:03:21.599877       1 cache.go:513] Pod cb9f7b52-c977-48e3-9fc7-9ddb296124bc updated on a different node than previously added to.\nF0515 19:03:21.599910       1 cache.go:514] Schedulercache is corrupted and can badly affect scheduling decisions\n
May 15 19:03:29.089 E clusterversion/version changed Failing to True: WorkloadNotAvailable: could not find the deployment openshift-console-operator/console-operator during rollout
May 15 19:03:30.745 E ns/openshift-console pod/console-768d55745c-r8k48 node/ip-10-0-150-62.us-east-2.compute.internal container/console container exited with code 2 (Error): 15T18:49:16Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:26Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:36Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:46Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:49:56Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:06Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:17Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:27Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:37Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:47Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-15T18:50:57Z cmd/main: Binding to [::]:8443...\n2020-05-15T18:50:57Z cmd/main: using TLS\n
May 15 19:04:08.333 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:04:06.551204       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:04:06.551221       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:04:06.551239       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:04:06.551249       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:04:06.551257       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:04:06.551266       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:04:06.551283       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:04:06.551296       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:04:06.551307       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:04:06.551338       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:04:06.551349       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:04:06.551360       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:04:06.551370       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:04:06.551384       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:04:06.551424       1 server.go:681] external host was not specified, using 10.0.137.78\nI0515 19:04:06.551616       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:04:06.551898       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 19:04:28.392 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:04:28.187285       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:04:28.187300       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:04:28.187312       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:04:28.187323       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:04:28.187332       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:04:28.187341       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:04:28.187354       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:04:28.187365       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:04:28.187377       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:04:28.187388       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:04:28.187399       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:04:28.187410       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:04:28.187420       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:04:28.187431       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:04:28.187470       1 server.go:681] external host was not specified, using 10.0.137.78\nI0515 19:04:28.187645       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:04:28.187959       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 15 19:04:49.514 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0515 19:04:49.058367       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0515 19:04:49.058376       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0515 19:04:49.058381       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0515 19:04:49.058387       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0515 19:04:49.058393       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0515 19:04:49.058399       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0515 19:04:49.058407       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0515 19:04:49.058415       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0515 19:04:49.058423       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0515 19:04:49.058433       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0515 19:04:49.058444       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0515 19:04:49.058453       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0515 19:04:49.058461       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0515 19:04:49.058467       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0515 19:04:49.058492       1 server.go:681] external host was not specified, using 10.0.137.78\nI0515 19:04:49.058656       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0515 19:04:49.058931       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
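The kube-apiserver restarts above all fail at the same point: the new static-pod revision tries to bind 0.0.0.0:6443 while the previous apiserver process on the node still holds the port, so the rollout churns until the old process releases it. As a minimal illustration (not taken from this job; the port is only reused here to match the error string), the following Go snippet reproduces the failure by attempting the same bind twice:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// First listener takes the port, standing in for the old apiserver revision.
	first, err := net.Listen("tcp", "0.0.0.0:6443")
	if err != nil {
		fmt.Println("could not take port at all:", err)
		return
	}
	defer first.Close()

	// Second bind stands in for the restarted container; it fails with
	// "bind: address already in use", the same error seen in the events above.
	if _, err := net.Listen("tcp", "0.0.0.0:6443"); err != nil {
		fmt.Println("failed to create listener:", err)
	}
}
```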
May 15 19:05:24.630 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-78.us-east-2.compute.internal node/ip-10-0-137-78.us-east-2.compute.internal container/kube-controller-manager container exited with code 255 (Error): =true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:05:23.816291       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/networks?allowWatchBookmarks=true&resourceVersion=21435&timeout=6m19s&timeoutSeconds=379&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:05:23.817380       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/projects?allowWatchBookmarks=true&resourceVersion=23359&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:05:23.818536       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/dnses?allowWatchBookmarks=true&resourceVersion=21415&timeout=8m2s&timeoutSeconds=482&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:05:23.819969       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/prometheuses?allowWatchBookmarks=true&resourceVersion=28285&timeout=5m44s&timeoutSeconds=344&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0515 19:05:23.821237       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.IngressClass: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingressclasses?allowWatchBookmarks=true&resourceVersion=20500&timeout=5m45s&timeoutSeconds=345&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0515 19:05:24.374319       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0515 19:05:24.374464       1 controllermanager.go:291] leaderelection lost\n
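The kube-controller-manager exit above (code 255) is the expected shape of a lost leader lease: with the local apiserver refusing connections, lease renewal times out, the OnStoppedLeading callback fires, and the process exits fatally so another replica can take over. A hedged sketch of that generic client-go leader-election pattern (not the controller-manager's actual wiring; the lock type, namespace, name, and timings here are illustrative):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	// Illustrative Lease-based lock; the identity is what shows up as the leader.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-controller"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Run the controllers here for as long as the lease is held.
			},
			OnStoppedLeading: func() {
				// Mirrors the fatal "leaderelection lost" exit in the event above.
				log.Fatal("leaderelection lost")
			},
		},
	})
}
```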
May 15 19:06:44.603 E ns/openshift-sdn pod/sdn-controller-mc4fm node/ip-10-0-150-62.us-east-2.compute.internal container/sdn-controller container exited with code 2 (Error): 50] Created HostSubnet ip-10-0-140-63.us-east-2.compute.internal (host: "ip-10-0-140-63.us-east-2.compute.internal", ip: "10.0.140.63", subnet: "10.128.2.0/23")\nI0515 18:47:32.310702       1 subnets.go:150] Created HostSubnet ip-10-0-128-178.us-east-2.compute.internal (host: "ip-10-0-128-178.us-east-2.compute.internal", ip: "10.0.128.178", subnet: "10.129.2.0/23")\nI0515 18:54:35.162388       1 vnids.go:116] Allocated netid 4369762 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-6886"\nI0515 18:54:35.172389       1 vnids.go:116] Allocated netid 14726206 for namespace "e2e-frontend-ingress-available-4412"\nI0515 18:54:35.182826       1 vnids.go:116] Allocated netid 9882924 for namespace "e2e-kubernetes-api-available-5450"\nI0515 18:54:35.191865       1 vnids.go:116] Allocated netid 220206 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9102"\nI0515 18:54:35.200139       1 vnids.go:116] Allocated netid 15565498 for namespace "e2e-k8s-service-lb-available-9962"\nI0515 18:54:35.208700       1 vnids.go:116] Allocated netid 15433848 for namespace "e2e-openshift-api-available-5093"\nI0515 18:54:35.219251       1 vnids.go:116] Allocated netid 16493244 for namespace "e2e-k8s-sig-apps-job-upgrade-6903"\nI0515 18:54:35.227549       1 vnids.go:116] Allocated netid 14260698 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-8661"\nI0515 18:54:35.234523       1 vnids.go:116] Allocated netid 4442983 for namespace "e2e-check-for-critical-alerts-2509"\nI0515 18:54:35.240110       1 vnids.go:116] Allocated netid 9155734 for namespace "e2e-k8s-sig-apps-deployment-upgrade-5070"\nI0515 18:54:35.248572       1 vnids.go:116] Allocated netid 5208918 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-3379"\nE0515 19:04:51.778799       1 leaderelection.go:356] Failed to update lock: Put https://api-int.ci-op-g6f6ltbv-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.150.62:47246->10.0.128.98:6443: read: connection reset by peer\n
May 15 19:06:54.763 E ns/openshift-sdn pod/sdn-wqxl6 node/ip-10-0-150-62.us-east-2.compute.internal container/sdn container exited with code 255 (Error): 0.62:10257]\nI0515 19:05:39.655592    2276 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:05:39.656379    2276 proxier.go:349] userspace syncProxyRules took 50.423289ms\nI0515 19:05:43.790877    2276 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.132.95:6443 10.0.137.78:6443 10.0.150.62:6443]\nI0515 19:05:43.959854    2276 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:05:43.960459    2276 proxier.go:349] userspace syncProxyRules took 33.494127ms\nI0515 19:05:53.587917    2276 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.132.95:6443 10.0.137.78:6443 10.0.150.62:6443]\nI0515 19:05:53.764806    2276 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:05:53.765485    2276 proxier.go:349] userspace syncProxyRules took 38.610714ms\nI0515 19:06:41.811018    2276 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.3:6443 10.130.0.3:6443]\nI0515 19:06:41.811068    2276 roundrobin.go:217] Delete endpoint 10.128.0.23:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:06:41.811089    2276 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.3:8443 10.130.0.3:8443]\nI0515 19:06:41.811102    2276 roundrobin.go:217] Delete endpoint 10.128.0.23:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:06:42.040025    2276 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:06:42.040721    2276 proxier.go:349] userspace syncProxyRules took 36.550395ms\nI0515 19:06:45.955575    2276 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0515 19:06:54.337813    2276 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
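The sdn pod exits in this run follow the same pattern: the OVS database socket disappears while openvswitch restarts underneath the pod ("dial unix /var/run/openvswitch/db.sock: connect: no such file or directory"), the health check times out, and the pod deliberately exits so it can re-initialize against the new OVS. A minimal, illustrative probe of that condition (not the SDN's actual healthcheck code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeOVS dials the OVS database socket that the SDN health check watches
// and reports whether it is currently reachable.
func probeOVS(path string) error {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		// e.g. "connect: no such file or directory" while OVS is restarting.
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeOVS("/var/run/openvswitch/db.sock"); err != nil {
		fmt.Println("OVS socket unavailable:", err)
		return
	}
	fmt.Println("OVS socket reachable")
}
```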
May 15 19:07:06.938 E ns/openshift-sdn pod/sdn-controller-sp7wc node/ip-10-0-132-95.us-east-2.compute.internal container/sdn-controller container exited with code 2 (Error): I0515 18:38:42.978874       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0515 18:42:37.987814       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-g6f6ltbv-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
May 15 19:07:28.318 E ns/openshift-sdn pod/sdn-28nzl node/ip-10-0-128-178.us-east-2.compute.internal container/sdn container exited with code 255 (Error): 8443]\nI0515 19:06:41.811179    2364 roundrobin.go:217] Delete endpoint 10.128.0.23:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:06:41.978204    2364 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:06:41.978665    2364 proxier.go:349] userspace syncProxyRules took 29.191158ms\nI0515 19:07:23.155761    2364 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.3:6443 10.130.0.3:6443]\nI0515 19:07:23.155798    2364 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.3:8443 10.130.0.3:8443]\nI0515 19:07:23.167997    2364 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.3:6443]\nI0515 19:07:23.168035    2364 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:07:23.168053    2364 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.3:8443]\nI0515 19:07:23.168065    2364 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:07:23.289041    2364 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.289432    2364 proxier.go:349] userspace syncProxyRules took 29.118369ms\nI0515 19:07:23.423187    2364 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.423662    2364 proxier.go:349] userspace syncProxyRules took 28.280485ms\nI0515 19:07:24.791343    2364 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nF0515 19:07:27.600354    2364 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
May 15 19:07:54.323 E ns/openshift-multus pod/multus-admission-controller-lhkfr node/ip-10-0-137-78.us-east-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
May 15 19:07:55.352 E ns/openshift-sdn pod/sdn-nljbs node/ip-10-0-137-78.us-east-2.compute.internal container/sdn container exited with code 255 (Error): 3 10.130.0.3:8443]\nI0515 19:07:23.167996   85240 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.3:6443]\nI0515 19:07:23.168038   85240 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:07:23.168059   85240 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.3:8443]\nI0515 19:07:23.168071   85240 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:07:23.379859   85240 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.380495   85240 proxier.go:349] userspace syncProxyRules took 32.555241ms\nI0515 19:07:23.533792   85240 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.534720   85240 proxier.go:349] userspace syncProxyRules took 33.861472ms\nI0515 19:07:51.667511   85240 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0515 19:07:53.449230   85240 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0515 19:07:53.449260   85240 pod.go:233] Error updating OVS multicast flows for VNID 1145367: exit status 1\nI0515 19:07:53.453412   85240 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0515 19:07:53.456836   85240 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-lhkfr\nI0515 19:07:54.935177   85240 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0515 19:07:54.935350   85240 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
May 15 19:07:56.308 E ns/openshift-multus pod/multus-5rvdf node/ip-10-0-150-62.us-east-2.compute.internal container/kube-multus container exited with code 137 (Error): 
May 15 19:08:22.318 E ns/openshift-sdn pod/sdn-wmldw node/ip-10-0-158-250.us-east-2.compute.internal container/sdn container exited with code 255 (Error): 0515 19:07:23.166209   82830 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:07:23.300925   82830 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.301439   82830 proxier.go:349] userspace syncProxyRules took 31.858249ms\nI0515 19:07:23.429771   82830 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:07:23.430222   82830 proxier.go:349] userspace syncProxyRules took 27.733738ms\nI0515 19:08:08.383132   82830 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.3:6443 10.130.0.62:6443]\nI0515 19:08:08.383183   82830 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.3:8443 10.130.0.62:8443]\nI0515 19:08:08.405198   82830 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.130.0.62:6443]\nI0515 19:08:08.405235   82830 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:08:08.405254   82830 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.130.0.62:8443]\nI0515 19:08:08.405266   82830 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:08:08.527197   82830 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.527660   82830 proxier.go:349] userspace syncProxyRules took 28.649838ms\nI0515 19:08:08.680320   82830 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.680857   82830 proxier.go:349] userspace syncProxyRules took 28.881841ms\nF0515 19:08:21.992968   82830 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
May 15 19:08:39.461 E ns/openshift-multus pod/multus-admission-controller-qxjh2 node/ip-10-0-132-95.us-east-2.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
May 15 19:08:41.375 E ns/openshift-multus pod/multus-rb246 node/ip-10-0-158-250.us-east-2.compute.internal container/kube-multus container exited with code 137 (Error): 
May 15 19:08:54.709 E ns/openshift-sdn pod/sdn-smkft node/ip-10-0-140-63.us-east-2.compute.internal container/sdn container exited with code 255 (Error): 7] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.3:8443 10.130.0.62:8443]\nI0515 19:08:08.407071   57060 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.130.0.62:6443]\nI0515 19:08:08.407109   57060 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:08:08.407129   57060 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.130.0.62:8443]\nI0515 19:08:08.407141   57060 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:08:08.521126   57060 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.521694   57060 proxier.go:349] userspace syncProxyRules took 28.719916ms\nI0515 19:08:08.678394   57060 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.678989   57060 proxier.go:349] userspace syncProxyRules took 44.331893ms\nI0515 19:08:47.039080   57060 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0515 19:08:48.486142   57060 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.72:6443 10.130.0.62:6443]\nI0515 19:08:48.486198   57060 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.72:8443 10.130.0.62:8443]\nI0515 19:08:48.630027   57060 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:48.630649   57060 proxier.go:349] userspace syncProxyRules took 34.472607ms\nF0515 19:08:54.153683   57060 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
May 15 19:09:17.705 E ns/openshift-sdn pod/sdn-tj7wm node/ip-10-0-132-95.us-east-2.compute.internal container/sdn container exited with code 255 (Error): oints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.130.0.62:6443]\nI0515 19:08:08.406435   91552 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0515 19:08:08.406557   91552 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.130.0.62:8443]\nI0515 19:08:08.406643   91552 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0515 19:08:08.566271   91552 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.566868   91552 proxier.go:349] userspace syncProxyRules took 33.541401ms\nI0515 19:08:08.721174   91552 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:08.721777   91552 proxier.go:349] userspace syncProxyRules took 32.311422ms\nI0515 19:08:38.754923   91552 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-qxjh2\nI0515 19:08:44.191270   91552 pod.go:504] CNI_ADD openshift-multus/multus-admission-controller-vwgdb got IP 10.129.0.72, ofport 73\nI0515 19:08:48.488114   91552 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.72:6443 10.130.0.62:6443]\nI0515 19:08:48.488182   91552 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.72:8443 10.130.0.62:8443]\nI0515 19:08:48.641667   91552 proxier.go:370] userspace proxy: processing 0 service events\nI0515 19:08:48.643374   91552 proxier.go:349] userspace syncProxyRules took 32.561725ms\nI0515 19:09:16.818472   91552 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0515 19:09:16.818527   91552 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
May 15 19:10:26.509 E ns/openshift-multus pod/multus-r6b8m node/ip-10-0-128-178.us-east-2.compute.internal container/kube-multus container exited with code 137 (Error): 
May 15 19:11:10.097 E ns/openshift-multus pod/multus-8bnc7 node/ip-10-0-137-78.us-east-2.compute.internal container/kube-multus container exited with code 137 (Error): 
May 15 19:11:43.230 E ns/openshift-machine-config-operator pod/machine-config-operator-649699867f-pqb2l node/ip-10-0-150-62.us-east-2.compute.internal container/machine-config-operator container exited with code 2 (Error): g-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nI0515 18:39:21.950884       1 operator.go:265] Starting MachineConfigOperator\nI0515 18:39:21.958871       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"f0ba0e4f-bbc6-4ecc-b7eb-362a0046b7c5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [] to [{operator 0.0.1-2020-05-15-182413}]\nI0515 18:39:23.820281       1 sync.go:62] [init mode] synced RenderConfig in 1.85222423s\nI0515 18:39:23.877083       1 sync.go:62] [init mode] synced MachineConfigPools in 48.672377ms\nI0515 18:40:51.143255       1 sync.go:62] [init mode] synced MachineConfigDaemon in 1m27.251927898s\nI0515 18:40:56.275600       1 sync.go:62] [init mode] synced MachineConfigController in 5.122914695s\nI0515 18:41:06.480382       1 sync.go:62] [init mode] synced RenderConfig in 132.555696ms\nI0515 18:41:06.875223       1 sync.go:62] [init mode] synced MachineConfigPools in 245.215292ms\nI0515 18:41:08.261890       1 sync.go:62] [init mode] synced MachineConfigDaemon in 1.239234652s\nI0515 18:41:10.407325       1 sync.go:62] [init mode] synced MachineConfigController in 2.134400699s\nI0515 18:41:31.535935       1 sync.go:62] [init mode] synced MachineConfigServer in 21.121134529s\nI0515 18:42:11.553559       1 sync.go:62] [init mode] synced RequiredPools in 40.013018734s\nI0515 18:42:11.613219       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"f0ba0e4f-bbc6-4ecc-b7eb-362a0046b7c5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 0.0.1-2020-05-15-182413}]\nI0515 18:42:11.946505       1 sync.go:93] Initialization complete\n
May 15 19:13:41.032 E ns/openshift-machine-config-operator pod/machine-config-daemon-rh9xx node/ip-10-0-158-250.us-east-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 15 19:13:45.777 E ns/openshift-machine-config-operator pod/machine-config-daemon-nlvvx node/ip-10-0-150-62.us-east-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 15 19:13:51.692 E ns/openshift-machine-config-operator pod/machine-config-daemon-2cpfc node/ip-10-0-132-95.us-east-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 15 19:14:08.526 E ns/openshift-machine-config-operator pod/machine-config-daemon-ctd77 node/ip-10-0-140-63.us-east-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 15 19:14:24.954 E ns/openshift-machine-config-operator pod/machine-config-daemon-p9hqd node/ip-10-0-128-178.us-east-2.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 15 19:14:32.887 E ns/openshift-machine-config-operator pod/machine-config-controller-bf96ff78b-sq274 node/ip-10-0-137-78.us-east-2.compute.internal container/machine-config-controller container exited with code 2 (Error): g resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0515 18:48:56.476388       1 node_controller.go:453] Pool worker: node ip-10-0-158-250.us-east-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:48:56.476426       1 node_controller.go:453] Pool worker: node ip-10-0-158-250.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:48:56.476437       1 node_controller.go:453] Pool worker: node ip-10-0-158-250.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0515 18:49:07.929132       1 node_controller.go:453] Pool worker: node ip-10-0-140-63.us-east-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:49:07.929170       1 node_controller.go:453] Pool worker: node ip-10-0-140-63.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:49:07.929180       1 node_controller.go:453] Pool worker: node ip-10-0-140-63.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0515 18:49:08.545535       1 node_controller.go:453] Pool worker: node ip-10-0-128-178.us-east-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:49:08.545569       1 node_controller.go:453] Pool worker: node ip-10-0-128-178.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-ae80d08f24ed34d4f739ff3053d499c3\nI0515 18:49:08.545581       1 node_controller.go:453] Pool worker: node ip-10-0-128-178.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Done\n
May 15 19:16:32.357 E ns/openshift-machine-config-operator pod/machine-config-server-2vvbk node/ip-10-0-137-78.us-east-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0515 18:41:29.663985       1 start.go:38] Version: machine-config-daemon-4.5.0-202005142357-4-g461e3688-dirty (461e36888f63ee7f592207f022dbca9248b9f984)\nI0515 18:41:29.664774       1 api.go:56] Launching server on :22624\nI0515 18:41:29.664795       1 api.go:56] Launching server on :22623\nI0515 18:45:06.146734       1 api.go:102] Pool worker requested by 10.0.147.174:33235\n
May 15 19:16:44.407 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:02:48.544Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:02:48.549Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:02:48.549Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:02:48.550Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:02:48.550Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:02:48.551Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:02:48.551Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:16:44.407 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-63.us-east-2.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/15 19:02:51 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
May 15 19:16:44.407 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/05/15 19:02:52 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/15 19:02:52 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 19:02:52 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 19:02:52 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/15 19:02:52 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 19:02:52 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/15 19:02:52 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/15 19:02:52 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0515 19:02:52.114476       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 19:02:52 http.go:107: HTTPS: listening on [::]:9091\n2020/05/15 19:06:33 oauthproxy.go:774: basicauth: 10.128.2.18:53044 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:11:03 oauthproxy.go:774: basicauth: 10.128.2.18:55972 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:15:33 oauthproxy.go:774: basicauth: 10.128.2.18:58848 Authorization header does not start with 'Basic', skipping basic authentication\n
May 15 19:16:44.407 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-15T19:02:51.383594562Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-15T19:02:51.38558373Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-15T19:02:56.593898256Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-15T19:02:56.605832113Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
May 15 19:16:44.452 E ns/openshift-marketplace pod/redhat-marketplace-6d444bc-56n9g node/ip-10-0-140-63.us-east-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
May 15 19:16:44.531 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-6dcdcddf75-2b29k node/ip-10-0-140-63.us-east-2.compute.internal container/operator container exited with code 255 (Error): at 64.191881ms\nI0515 19:06:42.584707       1 operator.go:145] Starting syncing operator at 2020-05-15 19:06:42.584701566 +0000 UTC m=+324.382324304\nI0515 19:06:43.113901       1 operator.go:147] Finished syncing operator at 529.189445ms\nI0515 19:11:44.464672       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:44.464661129 +0000 UTC m=+626.262283958\nI0515 19:11:44.491286       1 operator.go:147] Finished syncing operator at 26.618368ms\nI0515 19:11:44.491334       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:44.491327514 +0000 UTC m=+626.288950359\nI0515 19:11:44.512599       1 operator.go:147] Finished syncing operator at 21.264435ms\nI0515 19:11:45.252233       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:45.252222044 +0000 UTC m=+627.049844802\nI0515 19:11:45.272575       1 operator.go:147] Finished syncing operator at 20.346674ms\nI0515 19:11:45.351071       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:45.351064921 +0000 UTC m=+627.148687605\nI0515 19:11:45.373881       1 operator.go:147] Finished syncing operator at 22.810016ms\nI0515 19:11:45.449773       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:45.449764458 +0000 UTC m=+627.247387221\nI0515 19:11:45.488398       1 operator.go:147] Finished syncing operator at 38.627521ms\nI0515 19:11:45.555367       1 operator.go:145] Starting syncing operator at 2020-05-15 19:11:45.555356747 +0000 UTC m=+627.352979668\nI0515 19:11:46.085385       1 operator.go:147] Finished syncing operator at 530.019268ms\nI0515 19:16:41.687070       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0515 19:16:41.687746       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0515 19:16:41.687764       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0515 19:16:41.687778       1 logging_controller.go:93] Shutting down LogLevelController\nF0515 19:16:41.687864       1 builder.go:243] stopped\n
May 15 19:16:44.558 E ns/openshift-marketplace pod/community-operators-756cbb45fb-vfkgx node/ip-10-0-140-63.us-east-2.compute.internal container/community-operators container exited with code 2 (Error): 
May 15 19:16:45.591 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-63.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/15 19:02:43 Watching directory: "/etc/alertmanager/config"\n
May 15 19:16:45.591 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-140-63.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/15 19:02:44 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:02:44 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 19:02:44 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 19:02:44 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/15 19:02:44 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 19:02:44 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:02:44 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0515 19:02:44.171633       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 19:02:44 http.go:107: HTTPS: listening on [::]:9095\n
May 15 19:16:45.696 E ns/openshift-monitoring pod/thanos-querier-796b58c66d-pd6ml node/ip-10-0-140-63.us-east-2.compute.internal container/oauth-proxy container exited with code 2 (Error): :disabled\n2020/05/15 19:02:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/15 19:02:03 http.go:107: HTTPS: listening on [::]:9091\nI0515 19:02:03.954381       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 19:03:27 oauthproxy.go:774: basicauth: 10.130.0.41:49254 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:04:27 oauthproxy.go:774: basicauth: 10.130.0.41:50078 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:05:27 oauthproxy.go:774: basicauth: 10.130.0.41:58514 Authorization header does not start with 'Basic', skipping basic authentication\nE0515 19:05:27.802951       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:thanos-querier" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/05/15 19:05:27 oauthproxy.go:782: requestauth: 10.130.0.41:58514 tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:thanos-querier" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/05/15 19:07:27 oauthproxy.go:774: basicauth: 10.130.0.41:33594 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:08:27 oauthproxy.go:774: basicauth: 10.130.0.41:34572 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:09:27 oauthproxy.go:774: basicauth: 10.130.0.41:35456 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:11:27 oauthproxy.go:774: basicauth: 10.130.0.41:37228 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:13:27 oauthproxy.go:774: basicauth: 10.130.0.41:39178 Authorization header does not start with 'Basic', skipping basic authentication\n
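The transient 403s in the thanos-querier oauth-proxy log above come from delegated authentication: the proxy validates bearer tokens by creating a TokenReview, which requires its service account to be allowed to create tokenreviews.authentication.k8s.io at the cluster scope, a grant that was evidently missing for a short window during the upgrade. A hedged sketch of that delegation call using a recent client-go (purely illustrative; the token placeholder and function name are not from this job):

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// reviewToken asks the apiserver whether a bearer token is valid, the same
// delegated-auth call the proxy makes; it fails with Forbidden when the
// caller lacks "create" on tokenreviews at the cluster scope.
func reviewToken(client kubernetes.Interface, token string) (bool, error) {
	tr := &authv1.TokenReview{Spec: authv1.TokenReviewSpec{Token: token}}
	result, err := client.AuthenticationV1().TokenReviews().Create(
		context.Background(), tr, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return result.Status.Authenticated, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	ok, err := reviewToken(kubernetes.NewForConfigOrDie(cfg), "<bearer token to check>")
	fmt.Println(ok, err)
}
```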
May 15 19:16:45.844 E ns/openshift-monitoring pod/prometheus-adapter-6b88c54b5f-gsk5t node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0515 19:02:02.297874       1 adapter.go:94] successfully using in-cluster auth\nI0515 19:02:03.075298       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0515 19:02:03.075306       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0515 19:02:03.075691       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0515 19:02:03.076336       1 secure_serving.go:178] Serving securely on [::]:6443\nI0515 19:02:03.076528       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
May 15 19:16:45.922 E ns/openshift-monitoring pod/openshift-state-metrics-7fb657774-t8nzl node/ip-10-0-140-63.us-east-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
May 15 19:16:46.015 E ns/openshift-monitoring pod/kube-state-metrics-68cffcbc57-c72lv node/ip-10-0-140-63.us-east-2.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
May 15 19:16:46.050 E ns/openshift-marketplace pod/redhat-operators-7bdfbf479f-m9kck node/ip-10-0-140-63.us-east-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
May 15 19:16:46.091 E ns/openshift-kube-storage-version-migrator pod/migrator-8498dddb67-qggtx node/ip-10-0-140-63.us-east-2.compute.internal container/migrator container exited with code 2 (Error): 
May 15 19:16:46.140 E ns/openshift-marketplace pod/certified-operators-759bf95cb6-lgbrg node/ip-10-0-140-63.us-east-2.compute.internal container/certified-operators container exited with code 2 (Error): 
May 15 19:16:50.676 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-754bd4684f-5zkbf node/ip-10-0-137-78.us-east-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): stTransitionTime":"2020-05-15T19:00:45Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-05-15T19:16:49Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-05-15T18:39:23Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0515 19:16:49.099462       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"6ba63d22-1c7a-41cb-ae4b-aef19f2fa073", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0515 19:16:49.131032       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0515 19:16:49.132068       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0515 19:16:49.132223       1 builder.go:248] server exited\nI0515 19:16:49.132440       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0515 19:16:49.132541       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0515 19:16:49.132946       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0515 19:16:49.133071       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0515 19:16:49.133118       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0515 19:16:49.133142       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0515 19:16:49.133165       1 base_controller.go:101] Shutting down LoggingSyncer ...\nW0515 19:16:49.133230       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
May 15 19:16:51.581 E ns/openshift-machine-config-operator pod/machine-config-server-n28pr node/ip-10-0-150-62.us-east-2.compute.internal container/machine-config-server container exited with code 2 (Error): I0515 18:41:30.342713       1 start.go:38] Version: machine-config-daemon-4.5.0-202005142357-4-g461e3688-dirty (461e36888f63ee7f592207f022dbca9248b9f984)\nI0515 18:41:30.344952       1 api.go:56] Launching server on :22624\nI0515 18:41:30.345365       1 api.go:56] Launching server on :22623\nI0515 18:45:03.369619       1 api.go:102] Pool worker requested by 10.0.128.98:62373\n
May 15 19:16:53.543 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5c77658756-7xnnq node/ip-10-0-158-250.us-east-2.compute.internal container/snapshot-controller container exited with code 2 (Error): 
May 15 19:17:04.414 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-128-178.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:17:01.227Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:17:01.230Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:17:01.231Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:17:01.232Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:17:01.232Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:17:01.232Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:17:01.233Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:17:01.233Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:18:51.027 E ns/openshift-machine-config-operator pod/machine-config-daemon-f44wq node/ip-10-0-137-78.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 15 19:19:12.450 E ns/openshift-machine-api pod/machine-api-controllers-84b896766d-gh99t node/ip-10-0-150-62.us-east-2.compute.internal container/machineset-controller container exited with code 1 (Error): 
May 15 19:19:12.625 E ns/openshift-insights pod/insights-operator-846fc77856-7cpnv node/ip-10-0-150-62.us-east-2.compute.internal container/operator container exited with code 2 (Error): er.go:63] Recording config/infrastructure with fingerprint=\nI0515 19:17:10.819766       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0515 19:17:10.823616       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0515 19:17:10.829658       1 diskrecorder.go:63] Recording config/imageregistry with fingerprint=\nI0515 19:17:10.834137       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0515 19:17:10.841451       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0515 19:17:10.847241       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0515 19:17:10.851189       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0515 19:17:10.878851       1 diskrecorder.go:170] Writing 58 records to /var/lib/insights-operator/insights-2020-05-15-191710.tar.gz\nI0515 19:17:10.887921       1 diskrecorder.go:134] Wrote 58 records to disk in 9ms\nI0515 19:17:10.887948       1 periodic.go:151] Periodic gather config completed in 1.48s\nI0515 19:17:10.887957       1 controllerstatus.go:40] name=periodic-config healthy=true reason= message=\nI0515 19:17:12.548415       1 httplog.go:90] GET /metrics: (9.394179ms) 200 [Prometheus/2.15.2 10.131.0.25:43978]\nI0515 19:17:16.663201       1 httplog.go:90] GET /metrics: (1.885462ms) 200 [Prometheus/2.15.2 10.129.2.25:48216]\nI0515 19:17:42.538906       1 httplog.go:90] GET /metrics: (7.936646ms) 200 [Prometheus/2.15.2 10.131.0.25:43978]\nI0515 19:17:46.654473       1 httplog.go:90] GET /metrics: (1.930019ms) 200 [Prometheus/2.15.2 10.129.2.25:48216]\nI0515 19:18:12.538427       1 httplog.go:90] GET /metrics: (7.361721ms) 200 [Prometheus/2.15.2 10.131.0.25:43978]\nI0515 19:18:16.654297       1 httplog.go:90] GET /metrics: (1.807348ms) 200 [Prometheus/2.15.2 10.129.2.25:48216]\nI0515 19:18:42.538058       1 httplog.go:90] GET /metrics: (7.058944ms) 200 [Prometheus/2.15.2 10.131.0.25:43978]\nI0515 19:18:46.672601       1 httplog.go:90] GET /metrics: (19.984035ms) 200 [Prometheus/2.15.2 10.129.2.25:48216]\n
May 15 19:19:14.200 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-754bd4684f-sh82j node/ip-10-0-150-62.us-east-2.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): o:223] Waiting for caches to sync for LoggingSyncer\nI0515 19:17:04.140630       1 shared_informer.go:223] Waiting for caches to sync for StatusSyncer_kube-storage-version-migrator\nI0515 19:17:04.140750       1 controller.go:113] Starting KubeStorageVersionMigratorOperator\nI0515 19:17:04.185907       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file \nI0515 19:17:04.193203       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI0515 19:17:04.240470       1 shared_informer.go:230] Caches are synced for LoggingSyncer \nI0515 19:17:04.240647       1 base_controller.go:54] Starting #1 worker of LoggingSyncer controller ...\nI0515 19:17:04.244081       1 shared_informer.go:230] Caches are synced for StatusSyncer_kube-storage-version-migrator \nI0515 19:17:04.244148       1 base_controller.go:54] Starting #1 worker of StatusSyncer_kube-storage-version-migrator controller ...\nI0515 19:19:10.026918       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0515 19:19:10.027655       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\nI0515 19:19:10.031325       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0515 19:19:10.031344       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0515 19:19:10.031378       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0515 19:19:10.031459       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from runtime/asm_amd64.s:1357\nI0515 19:19:10.031540       1 reflector.go:181] Stopping reflector *v1.Deployment (10m0s) from runtime/asm_amd64.s:1357\nI0515 19:19:10.031612       1 reflector.go:181] Stopping reflector *unstructured.Unstructured (12h0m0s) from runtime/asm_amd64.s:1357\nW0515 19:19:10.031659       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\n
May 15 19:19:14.540 E ns/openshift-machine-api pod/machine-api-operator-5654d4454-ncb7l node/ip-10-0-150-62.us-east-2.compute.internal container/machine-api-operator container exited with code 2 (Error): 
May 15 19:19:14.615 E ns/openshift-machine-config-operator pod/machine-config-operator-78575c7fc7-xw4ln node/ip-10-0-150-62.us-east-2.compute.internal container/machine-config-operator container exited with code 2 (Error): eport event: 'Normal' 'LeaderElection' 'machine-config-operator-78575c7fc7-xw4ln_d2d84916-2e6c-4049-8aca-ac34caf29321 became leader'\nI0515 19:13:39.156550       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0515 19:13:39.897409       1 operator.go:265] Starting MachineConfigOperator\nI0515 19:13:39.904168       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"f0ba0e4f-bbc6-4ecc-b7eb-362a0046b7c5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-05-15-182413}] to [{operator 0.0.1-2020-05-15-182704}]\nW0515 19:17:26.408164       1 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.ClusterRoleBinding ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nW0515 19:17:26.408498       1 reflector.go:402] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Network ended with: very short watch: github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\nW0515 19:17:26.408850       1 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.Deployment ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\nE0515 19:17:26.409477       1 reflector.go:380] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.ControllerConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=38475&timeout=6m30s&timeoutSeconds=390&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
May 15 19:19:16.601 E ns/openshift-machine-config-operator pod/machine-config-daemon-7jdwh node/ip-10-0-140-63.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 15 19:19:17.243 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
May 15 19:19:31.887 E ns/openshift-console pod/console-6955c6798f-w6bfd node/ip-10-0-150-62.us-east-2.compute.internal container/console container exited with code 2 (Error): 2020-05-15T19:16:59Z cmd/main: cookies are secure!\n2020-05-15T19:16:59Z cmd/main: Binding to [::]:8443...\n2020-05-15T19:16:59Z cmd/main: using TLS\n
May 15 19:20:54.883 E ns/openshift-marketplace pod/redhat-marketplace-6d444bc-6r7fk node/ip-10-0-158-250.us-east-2.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
May 15 19:20:58.891 E ns/openshift-marketplace pod/redhat-operators-7bdfbf479f-gsjlg node/ip-10-0-158-250.us-east-2.compute.internal container/redhat-operators container exited with code 2 (Error): 
May 15 19:21:12.508 E ns/openshift-machine-config-operator pod/machine-config-daemon-lv9sd node/ip-10-0-150-62.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 15 19:21:18.160 E ns/openshift-monitoring pod/prometheus-adapter-6b88c54b5f-z7fxr node/ip-10-0-158-250.us-east-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0515 19:02:16.180998       1 adapter.go:94] successfully using in-cluster auth\nI0515 19:02:16.628552       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0515 19:02:16.628597       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0515 19:02:16.629092       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0515 19:02:16.629976       1 secure_serving.go:178] Serving securely on [::]:6443\nI0515 19:02:16.630446       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
May 15 19:21:18.261 E ns/openshift-monitoring pod/grafana-76d78968db-8zpjk node/ip-10-0-158-250.us-east-2.compute.internal container/grafana container exited with code 1 (Error): 
May 15 19:21:18.261 E ns/openshift-monitoring pod/grafana-76d78968db-8zpjk node/ip-10-0-158-250.us-east-2.compute.internal container/grafana-proxy container exited with code 2 (Error): 
May 15 19:21:19.234 E ns/openshift-monitoring pod/openshift-state-metrics-7fb657774-gh79r node/ip-10-0-158-250.us-east-2.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
May 15 19:21:19.360 E ns/openshift-monitoring pod/thanos-querier-796b58c66d-xfkpw node/ip-10-0-158-250.us-east-2.compute.internal container/oauth-proxy container exited with code 2 (Error): 05:27 oauthproxy.go:774: basicauth: 10.130.0.41:58520 Authorization header does not start with 'Basic', skipping basic authentication\nE0515 19:05:27.965578       1 webhook.go:109] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:thanos-querier" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/05/15 19:05:27 oauthproxy.go:782: requestauth: 10.130.0.41:58520 tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:thanos-querier" cannot create resource "tokenreviews" in API group "authentication.k8s.io" at the cluster scope\n2020/05/15 19:05:28 oauthproxy.go:774: basicauth: 10.130.0.41:58588 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:06:27 oauthproxy.go:774: basicauth: 10.130.0.41:60822 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:10:27 oauthproxy.go:774: basicauth: 10.130.0.41:36314 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:12:27 oauthproxy.go:774: basicauth: 10.130.0.41:38180 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:14:27 oauthproxy.go:774: basicauth: 10.130.0.41:40154 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:15:27 oauthproxy.go:774: basicauth: 10.130.0.41:41068 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:16:27 oauthproxy.go:774: basicauth: 10.130.0.41:41884 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:18:20 oauthproxy.go:774: basicauth: 10.128.0.94:49804 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:20:43 oauthproxy.go:774: basicauth: 10.130.0.33:41570 Authorization header does not start with 'Basic', skipping basic authentication\n
May 15 19:21:21.847 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
May 15 19:21:33.399 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-140-63.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:21:28.321Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:21:28.325Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:21:28.327Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:21:28.328Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:21:28.328Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:21:28.328Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:21:28.330Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:21:28.330Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:21:48.372 E ns/openshift-console pod/console-6955c6798f-qtd2g node/ip-10-0-132-95.us-east-2.compute.internal container/console container exited with code 2 (Error): 2020-05-15T19:02:49Z cmd/main: cookies are secure!\n2020-05-15T19:02:49Z cmd/main: Binding to [::]:8443...\n2020-05-15T19:02:49Z cmd/main: using TLS\n2020-05-15T19:19:46Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
May 15 19:22:05.748 E kube-apiserver Kube API started failing: Get https://api.ci-op-g6f6ltbv-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
May 15 19:23:21.333 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
May 15 19:23:29.330 E ns/openshift-machine-config-operator pod/machine-config-daemon-fr6vf node/ip-10-0-158-250.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 15 19:23:30.585 E ns/openshift-machine-config-operator pod/machine-config-daemon-zsvll node/ip-10-0-132-95.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 15 19:23:39.808 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-178.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/15 19:17:00 Watching directory: "/etc/alertmanager/config"\n
May 15 19:23:39.808 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-128-178.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/15 19:17:00 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:17:00 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 19:17:00 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 19:17:00 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/15 19:17:00 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 19:17:00 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:17:00 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/15 19:17:00 http.go:107: HTTPS: listening on [::]:9095\nI0515 19:17:00.911853       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
May 15 19:23:39.855 E ns/openshift-monitoring pod/prometheus-adapter-6b88c54b5f-dxlcq node/ip-10-0-128-178.us-east-2.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0515 19:16:47.301524       1 adapter.go:94] successfully using in-cluster auth\nI0515 19:16:48.413214       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0515 19:16:48.413271       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0515 19:16:48.413536       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0515 19:16:48.421157       1 secure_serving.go:178] Serving securely on [::]:6443\nI0515 19:16:48.421859       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nW0515 19:19:41.221124       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\n
May 15 19:23:39.886 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-178.us-east-2.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/15 19:01:44 Watching directory: "/etc/alertmanager/config"\n
May 15 19:23:39.886 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-128-178.us-east-2.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/15 19:01:45 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:01:45 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/15 19:01:45 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/15 19:01:46 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/15 19:01:46 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 19:01:46 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/15 19:01:46 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0515 19:01:46.004857       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 19:01:46 http.go:107: HTTPS: listening on [::]:9095\n
May 15 19:23:39.910 E ns/openshift-monitoring pod/telemeter-client-7f865c7dc4-7rwc5 node/ip-10-0-128-178.us-east-2.compute.internal container/reload container exited with code 2 (Error): 
May 15 19:23:39.910 E ns/openshift-monitoring pod/telemeter-client-7f865c7dc4-7rwc5 node/ip-10-0-128-178.us-east-2.compute.internal container/telemeter-client container exited with code 2 (Error): 
May 15 19:23:40.861 E ns/openshift-monitoring pod/thanos-querier-796b58c66d-prk6d node/ip-10-0-128-178.us-east-2.compute.internal container/oauth-proxy container exited with code 2 (Error): oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/15 19:16:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/15 19:16:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/05/15 19:16:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/15 19:16:48 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/15 19:16:48 http.go:107: HTTPS: listening on [::]:9091\nI0515 19:16:48.051067       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/15 19:17:02 oauthproxy.go:774: basicauth: 10.128.0.94:38130 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:19:01 oauthproxy.go:774: basicauth: 10.128.0.94:54164 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:19:34 oauthproxy.go:774: basicauth: 10.130.0.33:35530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:21:33 oauthproxy.go:774: basicauth: 10.130.0.33:43926 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:21:33 oauthproxy.go:774: basicauth: 10.130.0.33:43926 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:22:44 oauthproxy.go:774: basicauth: 10.130.0.33:55698 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:22:44 oauthproxy.go:774: basicauth: 10.130.0.33:55698 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:23:34 oauthproxy.go:774: basicauth: 10.130.0.33:33524 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/15 19:23:34 oauthproxy.go:774: basicauth: 10.130.0.33:33524 Authorization header does not start with 'Basic', skipping basic authentication\n
May 15 19:23:54.530 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-158-250.us-east-2.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-15T19:23:53.091Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-15T19:23:53.100Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-15T19:23:53.101Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-15T19:23:53.102Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-15T19:23:53.102Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-15T19:23:53.102Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-15T19:23:53.103Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-15T19:23:53.103Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-15
May 15 19:24:07.919 E ns/e2e-k8s-sig-apps-job-upgrade-6903 pod/foo-kfgn4 node/ip-10-0-128-178.us-east-2.compute.internal container/c container exited with code 137 (Error): 
May 15 19:24:07.940 E ns/e2e-k8s-sig-apps-job-upgrade-6903 pod/foo-9jkc8 node/ip-10-0-128-178.us-east-2.compute.internal container/c container exited with code 137 (Error): 
May 15 19:24:23.958 E ns/e2e-k8s-service-lb-available-9962 pod/service-test-2kbg5 node/ip-10-0-128-178.us-east-2.compute.internal container/netexec container exited with code 2 (Error): 
May 15 19:26:06.503 E ns/openshift-machine-config-operator pod/machine-config-daemon-sv6sm node/ip-10-0-128-178.us-east-2.compute.internal container/oauth-proxy container exited with code 1 (Error):