Result: SUCCESS
Tests: 3 failed / 121 succeeded
Started: 2021-04-09 18:41
Elapsed: 2h6m
Work namespace: ci-op-9trl3s6p
Refs: openshift-4.7:cca97c76, 74:6e275211
Pod: 203ed95c-9963-11eb-97cc-0a580a800323
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade [sig-imageregistry] Image registry remain available 1h5m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-imageregistry\]\sImage\sregistry\sremain\savailable$'
Image registry was unreachable during disruption for at least 2s of 1h5m12s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Apr 09 19:58:01.254 E image-registry Route stopped responding to GET requests over new connections
Apr 09 19:58:01.254 E image-registry Route stopped responding to GET requests on reused connections
Apr 09 19:58:01.476 I image-registry Route started responding to GET requests on reused connections
Apr 09 19:58:01.477 I image-registry Route started responding to GET requests over new connections
				from junit_upgrade_1618000431.xml
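
The reported percentage appears to be the observed outage time as a fraction of the monitored window. A minimal sketch of that arithmetic in Go, using the figures from the summary above (the monitor's exact rounding behaviour is an assumption):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the summary: at least 2s of disruption over a 1h5m12s window.
	outage := 2 * time.Second
	window := 1*time.Hour + 5*time.Minute + 12*time.Second
	// time.Duration is an int64 nanosecond count, so a float division gives the ratio.
	pct := float64(outage) / float64(window) * 100
	fmt.Printf("%.2f%% of the window\n", pct) // ~0.05%, shown as 0% in the summary
}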



Cluster upgrade [sig-network-edge] Application behind service load balancer with PDB is not disrupted 1h6m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-network\-edge\]\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 4s of 1h1m56s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Apr 09 20:11:05.821 E ns/e2e-k8s-service-lb-available-745 svc/service-test Service stopped responding to GET requests over new connections
Apr 09 20:11:05.983 I ns/e2e-k8s-service-lb-available-745 svc/service-test Service started responding to GET requests over new connections
Apr 09 20:23:06.966 E ns/e2e-k8s-service-lb-available-745 svc/service-test Service stopped responding to GET requests over new connections
Apr 09 20:23:07.820 E ns/e2e-k8s-service-lb-available-745 svc/service-test Service is not responding to GET requests over new connections
Apr 09 20:23:07.967 I ns/e2e-k8s-service-lb-available-745 svc/service-test Service started responding to GET requests over new connections
Apr 09 20:23:52.821 E ns/e2e-k8s-service-lb-available-745 svc/service-test Service stopped responding to GET requests over new connections
Apr 09 20:23:52.990 I ns/e2e-k8s-service-lb-available-745 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1618000431.xml
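
As a rough cross-check, the stop/start pairs in the excerpt above can be summed directly. The sketch below pairs the new-connection events by hand (an assumption; the monitor's own accounting evidently counts more than these visible gaps, since it reports "at least 4s"):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Stop/start timestamp pairs copied from the new-connection events above.
	// Parse errors are ignored for brevity.
	const layout = "Jan 02 15:04:05.000"
	pairs := [][2]string{
		{"Apr 09 20:11:05.821", "Apr 09 20:11:05.983"},
		{"Apr 09 20:23:06.966", "Apr 09 20:23:07.967"},
		{"Apr 09 20:23:52.821", "Apr 09 20:23:52.990"},
	}
	var total time.Duration
	for _, p := range pairs {
		stop, _ := time.Parse(layout, p[0])
		start, _ := time.Parse(layout, p[1])
		total += start.Sub(stop)
	}
	fmt.Println("visible gap between stop/start events:", total) // roughly 1.3s
}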



openshift-tests [sig-arch] Monitor cluster while tests execute 1h10m

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
90 error level events were detected during this test run:

Apr 09 19:30:50.709 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-137-160.us-west-1.compute.internal is unhealthy
Apr 09 19:52:05.272 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()
Apr 09 19:56:14.370 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-778b8fdc68-jjq2t node/ip-10-0-167-141.us-west-1.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error): "message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2021-04-09T19:13:31Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2021-04-09T19:06:54Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}\nI0409 19:13:31.564473       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"ba2b8e76-e1bf-47a5-8d53-8bf2e232a854", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("All is well")\nI0409 19:56:13.432176       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0409 19:56:13.432537       1 reflector.go:213] Stopping reflector *v1.ClusterOperator (10m0s) from k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156\nI0409 19:56:13.432593       1 reflector.go:213] Stopping reflector *v1.Deployment (10m0s) from k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156\nI0409 19:56:13.432634       1 reflector.go:213] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156\nI0409 19:56:13.432672       1 reflector.go:213] Stopping reflector *unstructured.Unstructured (12h0m0s) from k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156\nI0409 19:56:13.432721       1 base_controller.go:166] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0409 19:56:13.432746       1 base_controller.go:144] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0409 19:56:13.432763       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0409 19:56:13.432779       1 base_controller.go:166] Shutting down LoggingSyncer ...\nW0409 19:56:13.432840       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Apr 09 19:56:32.437 E ns/openshift-cluster-machine-approver pod/machine-approver-799965c46c-n7jk7 node/ip-10-0-167-141.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): s.k8s.io/v1 CertificateSigningRequest\nW0409 19:28:07.863550       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:34:07.865498       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:35:20.382655       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:35:21.430851       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:35:21.437675       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:40:15.235742       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:40:16.610652       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:40:16.614535       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:46:03.616050       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 19:55:55.618223       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\n
Apr 09 19:56:35.368 E ns/openshift-kube-storage-version-migrator pod/migrator-6b87d6c5f4-xwr4l node/ip-10-0-250-247.us-west-1.compute.internal container/migrator container exited with code 2 (Error): I0409 19:13:29.183330       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0409 19:13:29.184341       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0409 19:13:29.184408       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0409 19:13:29.184458       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0409 19:13:29.184505       1 migrator.go:18] FLAG: --kubeconfig=""\nI0409 19:13:29.184548       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0409 19:13:29.*****7       1 migrator.go:18] FLAG: --log_dir=""\nI0409 19:13:29.184639       1 migrator.go:18] FLAG: --log_file=""\nI0409 19:13:29.184677       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0409 19:13:29.184715       1 migrator.go:18] FLAG: --logtostderr="true"\nI0409 19:13:29.184755       1 migrator.go:18] FLAG: --skip_headers="false"\nI0409 19:13:29.184793       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0409 19:13:29.184832       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0409 19:13:29.184868       1 migrator.go:18] FLAG: --v="2"\nI0409 19:13:29.184906       1 migrator.go:18] FLAG: --vmodule=""\n
Apr 09 19:56:38.554 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-564548c89c-gs7c5 node/ip-10-0-167-141.us-west-1.compute.internal container/openshift-controller-manager-operator container exited with code 1 (Error):  from k8s.io/client-go@v0.20.1-rc.0/tools/cache/reflector.go:167\nI0409 19:56:37.528314       1 reflector.go:225] Stopping reflector *v1.Build (10m0s) from k8s.io/client-go@v0.20.1-rc.0/tools/cache/reflector.go:167\nI0409 19:56:37.528340       1 base_controller.go:166] Shutting down UserCAObservationController ...\nI0409 19:56:37.528350       1 base_controller.go:166] Shutting down ConfigObserver ...\nI0409 19:56:37.528359       1 base_controller.go:166] Shutting down ResourceSyncController ...\nI0409 19:56:37.528373       1 base_controller.go:166] Shutting down StatusSyncer_openshift-controller-manager ...\nI0409 19:56:37.529108       1 base_controller.go:144] All StatusSyncer_openshift-controller-manager post start hooks have been terminated\nI0409 19:56:37.528431       1 base_controller.go:113] Shutting down worker of UserCAObservationController controller ...\nI0409 19:56:37.529119       1 base_controller.go:103] All UserCAObservationController workers have been terminated\nI0409 19:56:37.528442       1 base_controller.go:113] Shutting down worker of ConfigObserver controller ...\nI0409 19:56:37.529182       1 base_controller.go:103] All ConfigObserver workers have been terminated\nI0409 19:56:37.528452       1 base_controller.go:113] Shutting down worker of ResourceSyncController controller ...\nI0409 19:56:37.529208       1 base_controller.go:103] All ResourceSyncController workers have been terminated\nI0409 19:56:37.528461       1 base_controller.go:113] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0409 19:56:37.529217       1 base_controller.go:103] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0409 19:56:37.528487       1 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/client-go@v0.20.1-rc.0/tools/cache/reflector.go:167\nI0409 19:56:37.528551       1 operator.go:141] Shutting down OpenShiftControllerManagerOperator\nW0409 19:56:37.528580       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Apr 09 19:56:50.151 E ns/openshift-monitoring pod/node-exporter-dtfdz node/ip-10-0-187-40.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:13:25.874Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:13:25.875Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Apr 09 19:57:02.407 E ns/openshift-monitoring pod/node-exporter-25hzn node/ip-10-0-137-160.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): collector=netstat\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:09:00.821Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:09:00.821Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\nlevel=error ts=2021-04-09T19:51:21.226Z caller=collector.go:161 msg="collector failed" name=netclass duration_seconds=0.084619711 err="could not get net class info: error obtaining net class info: open /host/sys/class/net/veth7aa2fb34: no such file or directory"\n
Apr 09 19:57:09.161 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vlp95 node/ip-10-0-250-247.us-west-1.compute.internal container/csi-liveness-probe container exited with code 2 (Error): 
Apr 09 19:57:09.161 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-vlp95 node/ip-10-0-250-247.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Apr 09 19:57:11.455 E ns/openshift-controller-manager pod/controller-manager-9k9qc node/ip-10-0-137-160.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): request from succeeding\nW0409 19:53:42.830777       1 reflector.go:436] k8s.io/client-go@v0.20.0/tools/cache/reflector.go:167: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 421; INTERNAL_ERROR") has prevented the request from succeeding\nW0409 19:53:42.840274       1 reflector.go:436] k8s.io/client-go@v0.20.0/tools/cache/reflector.go:167: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 357; INTERNAL_ERROR") has prevented the request from succeeding\nW0409 19:53:42.843481       1 reflector.go:436] k8s.io/client-go@v0.20.0/tools/cache/reflector.go:167: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 541; INTERNAL_ERROR") has prevented the request from succeeding\nW0409 19:53:42.846797       1 reflector.go:436] k8s.io/client-go@v0.20.0/tools/cache/reflector.go:167: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 537; INTERNAL_ERROR") has prevented the request from succeeding\nE0409 19:56:46.346173       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-base": Operation cannot be fulfilled on imagestreamimports.image.openshift.io "jenkins-agent-base": the object has been modified; please apply your changes to the latest version and try again\nE0409 19:56:47.533363       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-base": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-base": the image stream was updated from "42408" to "42624"\nE0409 19:56:47.557942       1 imagestream_controller.go:136] Error syncing image stream "openshift/jenkins-agent-base": Operation cannot be fulfilled on imagestream.image.openshift.io "jenkins-agent-base": the image stream was updated from "42408" to "42624"\n
Apr 09 19:57:14.155 E ns/openshift-monitoring pod/node-exporter-jbzwq node/ip-10-0-204-119.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:07:15.311Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:07:15.311Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Apr 09 19:57:14.206 E ns/openshift-ingress-canary pod/ingress-canary-l8b9z node/ip-10-0-187-40.us-west-1.compute.internal container/hello-openshift-canary container exited with code 2 (Error): serving on 8888\nserving on 8080\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\nServicing request.\n
Apr 09 19:57:21.609 E ns/openshift-monitoring pod/node-exporter-p9cz8 node/ip-10-0-167-141.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:07:26.974Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:07:26.974Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Apr 09 19:57:27.455 E ns/openshift-monitoring pod/kube-state-metrics-68b6d64d8d-mmgcb node/ip-10-0-250-247.us-west-1.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Apr 09 19:57:27.510 E ns/openshift-monitoring pod/telemeter-client-7dc9757b86-bvg8f node/ip-10-0-250-247.us-west-1.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 09 19:57:27.510 E ns/openshift-monitoring pod/telemeter-client-7dc9757b86-bvg8f node/ip-10-0-250-247.us-west-1.compute.internal container/reload container exited with code 2 (Error): 
Apr 09 19:57:35.699 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-250-247.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2021/04/09 19:13:55 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/09 19:13:55 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:13:55 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:13:55 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2021/04/09 19:13:55 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:13:55 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/09 19:13:55 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:13:55 http.go:107: HTTPS: listening on [::]:9095\nI0409 19:13:55.332684       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 09 19:57:35.699 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-250-247.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-09T19:13:54.534924785Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-09T19:13:54.535011074Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-09T19:13:54.53522979Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2021-04-09T19:13:54.53606191Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2021-04-09T19:13:56.497810283Z caller=reloader.go:347 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Apr 09 19:57:36.945 E ns/openshift-monitoring pod/node-exporter-tsv9h node/ip-10-0-175-2.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2021-04-09T19:15:08.990Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:15:08.991Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:15:08.991Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Apr 09 19:57:39.757 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-250-247.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-09T19:57:26.642Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 09 19:57:46.014 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-pd84b node/ip-10-0-175-2.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Apr 09 19:57:46.014 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-pd84b node/ip-10-0-175-2.us-west-1.compute.internal container/csi-liveness-probe container exited with code 2 (Error): 
Apr 09 19:57:47.569 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-67dcb6cddd-8qhwd node/ip-10-0-137-160.us-west-1.compute.internal container/pod-identity-webhook container exited with code 137 (Error): 
Apr 09 19:57:48.144 E ns/openshift-monitoring pod/node-exporter-wfxzx node/ip-10-0-250-247.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2021-04-09T19:13:05.825Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2021-04-09T19:13:05.825Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Apr 09 19:57:51.337 E ns/openshift-console pod/console-55b56888fd-mdnzg node/ip-10-0-204-119.us-west-1.compute.internal container/console container exited with code 2 (Error): W0409 19:16:21.718821       1 main.go:211] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0409 19:16:21.718912       1 main.go:288] cookies are secure!\nE0409 19:16:21.760089       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nI0409 19:16:31.809174       1 main.go:670] Binding to [::]:8443...\nI0409 19:16:31.809209       1 main.go:672] using TLS\n
Apr 09 19:57:54.175 E ns/openshift-monitoring pod/thanos-querier-5c96c9dfcb-f4hgs node/ip-10-0-250-247.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 2021/04/09 19:13:53 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:13:53 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:13:53 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:13:53 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/09 19:13:53 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:13:53 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:13:53 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:13:53 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2021/04/09 19:13:53 http.go:107: HTTPS: listening on [::]:9091\nI0409 19:13:53.968032       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 09 19:58:03.675 E ns/openshift-console pod/console-55b56888fd-tf2s6 node/ip-10-0-137-160.us-west-1.compute.internal container/console container exited with code 2 (Error): iled: 404 Not Found\nE0409 19:14:41.786290       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:14:51.792555       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:01.797982       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:11.804251       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:21.810294       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:31.816458       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:41.822738       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:15:51.831653       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nE0409 19:16:01.837515       1 auth.go:235] error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\nI0409 19:16:11.865598       1 main.go:670] Binding to [::]:8443...\nI0409 19:16:11.865700       1 main.go:672] using TLS\n
Apr 09 19:58:08.382 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-fcb5x node/ip-10-0-187-40.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Apr 09 19:58:08.382 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-fcb5x node/ip-10-0-187-40.us-west-1.compute.internal container/csi-liveness-probe container exited with code 2 (Error): 
Apr 09 19:58:10.352 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-250-247.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-09T19:58:07.913Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 09 19:58:23.746 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-hx29g node/ip-10-0-137-160.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Apr 09 19:58:23.746 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-hx29g node/ip-10-0-137-160.us-west-1.compute.internal container/csi-liveness-probe container exited with code 2 (Error): 
Apr 09 19:59:20.731 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()
Apr 09 19:59:46.761 E ns/openshift-sdn pod/sdn-controller-nsxxb node/ip-10-0-204-119.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): go:117] Allocated netid 7999331 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-9601"\nI0409 19:23:35.229914       1 vnids.go:117] Allocated netid 700745 for namespace "e2e-kubernetes-api-available-reused-connections-5683"\nI0409 19:23:35.255420       1 vnids.go:117] Allocated netid 922810 for namespace "e2e-oauth-api-available-new-connections-5207"\nI0409 19:23:35.272533       1 vnids.go:117] Allocated netid 3375116 for namespace "e2e-oauth-api-available-reused-connections-2720"\nI0409 19:23:35.285015       1 vnids.go:117] Allocated netid 15253549 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-4729"\nI0409 19:23:35.293900       1 vnids.go:117] Allocated netid 1161022 for namespace "e2e-image-registry-available-1208"\nI0409 19:23:35.311151       1 vnids.go:117] Allocated netid 1075056 for namespace "e2e-openshift-api-available-new-connections-9108"\nI0409 19:23:35.333830       1 vnids.go:117] Allocated netid 817259 for namespace "e2e-k8s-sig-apps-deployment-upgrade-9163"\nI0409 19:23:35.357600       1 vnids.go:117] Allocated netid 6545928 for namespace "e2e-openshift-api-available-reused-connections-495"\nI0409 19:23:35.562697       1 vnids.go:117] Allocated netid 1498218 for namespace "e2e-k8s-sig-apps-job-upgrade-8852"\nI0409 19:23:35.754524       1 vnids.go:117] Allocated netid 541702 for namespace "e2e-frontend-ingress-available-9617"\nI0409 19:23:35.967165       1 vnids.go:117] Allocated netid 1109149 for namespace "e2e-k8s-service-lb-available-745"\nI0409 19:23:36.150495       1 vnids.go:117] Allocated netid 9683230 for namespace "e2e-kubernetes-api-available-new-connections-5277"\nI0409 19:23:54.124209       1 vnids.go:139] Released netid 5689458 for namespace "e2e-test-prometheus-vbcwk"\nE0409 19:43:54.668462       1 leaderelection.go:361] Failed to update lock: Put "https://api-int.ci-op-9trl3s6p-48ce9.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller": read tcp 10.0.204.119:40208->10.0.172.105:6443: read: connection reset by peer\n
Apr 09 19:59:47.261 E ns/openshift-network-diagnostics pod/network-check-target-f7vhp node/ip-10-0-167-141.us-west-1.compute.internal container/network-check-target-container container exited with code 2 (Error): 
Apr 09 20:00:03.365 E ns/openshift-sdn pod/sdn-controller-6vmv4 node/ip-10-0-167-141.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0409 19:05:59.223049       1 leaderelection.go:243] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0409 19:09:19.976855       1 leaderelection.go:325] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\n
Apr 09 20:00:13.082 E ns/openshift-sdn pod/sdn-controller-grlcx node/ip-10-0-137-160.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0409 19:09:00.150689       1 leaderelection.go:243] attempting to acquire leader lease openshift-sdn/openshift-network-controller...\nE0409 19:09:19.853488       1 leaderelection.go:325] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\n
Apr 09 20:00:16.414 E ns/openshift-multus pod/multus-vnpb6 node/ip-10-0-167-141.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:00:16.455 E ns/openshift-multus pod/multus-admission-controller-fx26r node/ip-10-0-167-141.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 09 20:00:54.217 E ns/openshift-multus pod/multus-admission-controller-8fhc7 node/ip-10-0-137-160.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 09 20:01:06.159 E ns/openshift-multus pod/multus-nxc8n node/ip-10-0-204-119.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:01:13.638 E ns/openshift-network-diagnostics pod/network-check-target-zdm65 node/ip-10-0-175-2.us-west-1.compute.internal container/network-check-target-container container exited with code 2 (Error): 
Apr 09 20:01:34.180 E ns/openshift-multus pod/multus-admission-controller-xjdmc node/ip-10-0-204-119.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Apr 09 20:02:07.295 E ns/openshift-multus pod/multus-nr8xx node/ip-10-0-250-247.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:02:57.169 E ns/openshift-multus pod/multus-tmm2n node/ip-10-0-187-40.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:03:12.629 E ns/openshift-network-diagnostics pod/network-check-target-js94d node/ip-10-0-137-160.us-west-1.compute.internal container/network-check-target-container container exited with code 2 (Error): 
Apr 09 20:03:47.165 E ns/openshift-multus pod/multus-9x7wk node/ip-10-0-175-2.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:04:40.981 E ns/openshift-multus pod/multus-z8p4p node/ip-10-0-137-160.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Apr 09 20:05:54.585 E ns/openshift-machine-config-operator pod/machine-config-operator-6b6cb44f7b-2k78r node/ip-10-0-167-141.us-west-1.compute.internal container/machine-config-operator container exited with code 2 (Error):  in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 19:55:36.486745       1 sync.go:569] syncing Required MachineConfigPools\nW0409 19:55:38.307404       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 19:55:43.818740       1 sync.go:569] syncing Required MachineConfigPools\nW0409 19:56:59.425984       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 19:57:04.740770       1 sync.go:569] syncing Required MachineConfigPools\nW0409 19:57:06.558065       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 19:57:12.090634       1 sync.go:569] syncing Required MachineConfigPools\nW0409 19:59:45.310868       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0409 20:02:10.767891       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 20:02:15.911609       1 sync.go:569] syncing Required MachineConfigPools\nW0409 20:02:17.728499       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 20:02:23.236719       1 sync.go:569] syncing Required MachineConfigPools\nW0409 20:05:48.015613       1 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nI0409 20:05:53.181459       1 sync.go:569] syncing Required MachineConfigPools\n
Apr 09 20:07:49.912 E ns/openshift-machine-config-operator pod/machine-config-daemon-dm8t4 node/ip-10-0-175-2.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 09 20:08:03.440 E ns/openshift-machine-config-operator pod/machine-config-daemon-h5c9s node/ip-10-0-204-119.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 09 20:08:15.029 E ns/openshift-machine-config-operator pod/machine-config-daemon-wf22l node/ip-10-0-167-141.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 09 20:08:24.907 E ns/openshift-machine-config-operator pod/machine-config-daemon-hhc4k node/ip-10-0-187-40.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 09 20:08:30.584 E ns/openshift-machine-config-operator pod/machine-config-daemon-nsk75 node/ip-10-0-137-160.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Apr 09 20:08:48.654 E ns/openshift-machine-config-operator pod/machine-config-controller-5fb6575598-bn669 node/ip-10-0-204-119.us-west-1.compute.internal container/machine-config-controller container exited with code 2 (Error): = rendered-worker-0d7ac1955e5acd182a018ff4a70e0a87\nI0409 19:11:36.416613       1 node_controller.go:419] Pool worker: node ip-10-0-250-247.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done\nI0409 19:12:06.257057       1 node_controller.go:419] Pool worker: node ip-10-0-250-247.us-west-1.compute.internal: Reporting ready\nI0409 19:13:28.354880       1 node_controller.go:419] Pool worker: node ip-10-0-250-247.us-west-1.compute.internal: changed labels\nI0409 19:13:36.546081       1 node_controller.go:419] Pool worker: node ip-10-0-187-40.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-0d7ac1955e5acd182a018ff4a70e0a87\nI0409 19:13:36.546103       1 node_controller.go:419] Pool worker: node ip-10-0-187-40.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-0d7ac1955e5acd182a018ff4a70e0a87\nI0409 19:13:36.546108       1 node_controller.go:419] Pool worker: node ip-10-0-187-40.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done\nI0409 19:14:07.170865       1 node_controller.go:419] Pool worker: node ip-10-0-187-40.us-west-1.compute.internal: Reporting ready\nI0409 19:15:19.760042       1 node_controller.go:419] Pool worker: node ip-10-0-175-2.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-0d7ac1955e5acd182a018ff4a70e0a87\nI0409 19:15:19.760062       1 node_controller.go:419] Pool worker: node ip-10-0-175-2.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-0d7ac1955e5acd182a018ff4a70e0a87\nI0409 19:15:19.760068       1 node_controller.go:419] Pool worker: node ip-10-0-175-2.us-west-1.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done\nI0409 19:15:52.615590       1 node_controller.go:419] Pool worker: node ip-10-0-175-2.us-west-1.compute.internal: Reporting ready\n
Apr 09 20:10:57.251 E ns/openshift-marketplace pod/certified-operators-qjzps node/ip-10-0-250-247.us-west-1.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 09 20:10:57.321 E ns/openshift-marketplace pod/redhat-marketplace-sqcc2 node/ip-10-0-250-247.us-west-1.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 09 20:10:58.357 E ns/openshift-machine-api pod/machine-api-operator-59b579d9d8-gv5bg node/ip-10-0-204-119.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Apr 09 20:10:58.706 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-250-247.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2021/04/09 19:58:08 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2021/04/09 19:58:08 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:58:08 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:58:08 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/09 19:58:08 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:58:08 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2021/04/09 19:58:08 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:58:08 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\n2021/04/09 19:58:08 http.go:107: HTTPS: listening on [::]:9091\nI0409 19:58:08.777429       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 09 20:10:58.706 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-250-247.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-09T19:58:08.127513637Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-09T19:58:08.1275969Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-09T19:58:08.12868475Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=error ts=2021-04-09T19:58:08.135714281Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2021-04-09T19:58:17.797593394Z caller=reloader.go:347 msg="Reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml watched_dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\nlevel=info ts=2021-04-09T19:58:17.797730448Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=/etc/prometheus/rules/prometheus-k8s-rulefiles-0\n
Apr 09 20:11:01.409 E ns/openshift-machine-config-operator pod/machine-config-server-wfwjv node/ip-10-0-204-119.us-west-1.compute.internal container/machine-config-server container exited with code 2 (Error): I0409 19:07:19.787798       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-668-ga29f0d80-dirty (a29f0d80366f4a6eb72f163cb11249b8b965a22b)\nI0409 19:07:19.789628       1 api.go:72] Launching server on :22624\nI0409 19:07:19.789628       1 api.go:72] Launching server on :22623\nI0409 19:09:01.724610       1 api.go:117] Pool worker requested by address:"10.0.172.105:4946" User-Agent:"Ignition/2.9.0" Accept-Header: "application/vnd.coreos.ignition+json;version=3.2.0, */*;q=0.1"\n
Apr 09 20:11:41.376 E ns/e2e-k8s-service-lb-available-745 pod/service-test-77qmw node/ip-10-0-250-247.us-west-1.compute.internal container/netexec container exited with code 2 (Error): 
Apr 09 20:13:03.189 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()
Apr 09 20:13:14.631 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()
Apr 09 20:13:21.297 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Apr 09 20:15:06.420 E ns/openshift-console-operator pod/console-operator-85c6959759-k5zwl node/ip-10-0-167-141.us-west-1.compute.internal container/console-operator container exited with code 1 (Error): iledUpdate 1 replicas ready at version 4.7.0-0.ci.test-2021-04-09-184435-ci-op-9trl3s6p\nI0409 20:15:03.738031       1 status_controller.go:172] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2021-04-09T19:16:09Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2021-04-09T19:57:38Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2021-04-09T20:15:03Z","message":"DeploymentAvailable: 1 replicas ready at version 4.7.0-0.ci.test-2021-04-09-184435-ci-op-9trl3s6p","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2021-04-09T19:12:55Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0409 20:15:03.790529       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"f422351e-311c-4d8c-9a97-bca02a2dfc1f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 4.7.0-0.ci.test-2021-04-09-184435-ci-op-9trl3s6p")\nI0409 20:15:04.115251       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nE0409 20:15:04.115853       1 status.go:78] RouteHealthDegraded FailedLoadCA failed to read CA to check route health: context canceled\nI0409 20:15:04.122745       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0409 20:15:04.122854       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0409 20:15:04.122970       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nW0409 20:15:04.123454       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Apr 09 20:15:10.372 E ns/openshift-service-ca-operator pod/service-ca-operator-778c775d95-prlct node/ip-10-0-167-141.us-west-1.compute.internal container/service-ca-operator container exited with code 1 (Error): 
Apr 09 20:15:17.831 E ns/openshift-monitoring pod/prometheus-adapter-d5bd6bc75-49djm node/ip-10-0-175-2.us-west-1.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0409 19:57:03.247170       1 adapter.go:98] successfully using in-cluster auth\nI0409 19:57:04.694658       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0409 19:57:04.694661       1 dynamic_cafile_content.go:167] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0409 19:57:04.694988       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0409 19:57:04.696290       1 secure_serving.go:197] Serving securely on [::]:6443\nI0409 19:57:04.696527       1 tlsconfig.go:240] Starting DynamicServingCertificateController\n
Apr 09 20:15:17.919 E ns/openshift-marketplace pod/redhat-marketplace-shrsj node/ip-10-0-175-2.us-west-1.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 09 20:15:18.913 E ns/openshift-monitoring pod/telemeter-client-5775b5f775-46tpt node/ip-10-0-175-2.us-west-1.compute.internal container/reload container exited with code 2 (Error): 
Apr 09 20:15:18.913 E ns/openshift-monitoring pod/telemeter-client-5775b5f775-46tpt node/ip-10-0-175-2.us-west-1.compute.internal container/telemeter-client container exited with code 2 (Error): 
Apr 09 20:15:18.999 E ns/openshift-monitoring pod/thanos-querier-cb79fb999-fwdxr node/ip-10-0-175-2.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 2021/04/09 19:57:27 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:57:27 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:57:27 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:57:27 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/09 19:57:27 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:57:27 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:57:27 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:57:27 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0409 19:57:27.565075       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2021/04/09 19:57:27 http.go:107: HTTPS: listening on [::]:9091\n
Apr 09 20:15:19.964 E ns/openshift-kube-storage-version-migrator pod/migrator-bfb44b667-wn2tk node/ip-10-0-175-2.us-west-1.compute.internal container/migrator container exited with code 2 (Error): I0409 19:56:34.045060       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0409 19:56:34.045168       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0409 19:56:34.045178       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0409 19:56:34.045186       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0409 19:56:34.045194       1 migrator.go:18] FLAG: --kubeconfig=""\nI0409 19:56:34.045202       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0409 19:56:34.045211       1 migrator.go:18] FLAG: --log_dir=""\nI0409 19:56:34.045219       1 migrator.go:18] FLAG: --log_file=""\nI0409 19:56:34.045225       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0409 19:56:34.045233       1 migrator.go:18] FLAG: --logtostderr="true"\nI0409 19:56:34.045238       1 migrator.go:18] FLAG: --skip_headers="false"\nI0409 19:56:34.045245       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0409 19:56:34.045251       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0409 19:56:34.045257       1 migrator.go:18] FLAG: --v="2"\nI0409 19:56:34.045264       1 migrator.go:18] FLAG: --vmodule=""\n
Apr 09 20:15:20.009 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-175-2.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2021/04/09 19:57:32 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/09 19:57:32 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:57:32 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:57:32 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2021/04/09 19:57:32 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:57:32 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2021/04/09 19:57:32 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:57:32 http.go:107: HTTPS: listening on [::]:9095\nI0409 19:57:32.969757       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Apr 09 20:15:20.009 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-175-2.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): level=info ts=2021-04-09T19:57:32.566952963Z caller=main.go:147 msg="Starting prometheus-config-reloader" version="(version=0.44.1, branch=master, revision=1f0fd51d)"\nlevel=info ts=2021-04-09T19:57:32.567024071Z caller=main.go:148 build_context="(go=go1.15.5, user=root, date=20210118-21:06:52)"\nlevel=info ts=2021-04-09T19:57:32.567217585Z caller=main.go:182 msg="Starting web server for metrics" listen=localhost:8080\nlevel=info ts=2021-04-09T19:57:32.568522918Z caller=reloader.go:214 msg="started watching config file and directories for changes" cfg= out= dirs=/etc/alertmanager/config,/etc/alertmanager/secrets/alertmanager-main-tls,/etc/alertmanager/secrets/alertmanager-main-proxy,/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\nlevel=info ts=2021-04-09T19:57:34.968198882Z caller=reloader.go:347 msg="Reload triggered" cfg_in= cfg_out= watched_dirs="/etc/alertmanager/config, /etc/alertmanager/secrets/alertmanager-main-tls, /etc/alertmanager/secrets/alertmanager-main-proxy, /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy"\n
Apr 09 20:15:20.038 E ns/openshift-monitoring pod/grafana-7fcf8cf57c-9ks44 node/ip-10-0-175-2.us-west-1.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Apr 09 20:15:29.551 E ns/openshift-console pod/console-5c5b49cdc4-ghdmw node/ip-10-0-167-141.us-west-1.compute.internal container/console container exited with code 2 (Error): W0409 19:57:33.565420       1 main.go:211] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0409 19:57:33.565512       1 main.go:288] cookies are secure!\nI0409 19:57:33.613701       1 main.go:670] Binding to [::]:8443...\nI0409 19:57:33.613723       1 main.go:672] using TLS\n
Apr 09 20:15:33.319 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-250-247.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-09T20:15:29.361Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 09 20:15:34.572 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-848976b9cf-7r7gf node/ip-10-0-167-141.us-west-1.compute.internal container/pod-identity-webhook container exited with code 137 (Error): 
Apr 09 20:15:41.437 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-250-247.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2021-04-09T20:15:38.450Z caller=main.go:289 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Apr 09 20:19:46.704 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator openshift-apiserver is not available
Apr 09 20:21:01.169 E clusteroperator/dns changed Degraded to True: DNSDegraded: DNS default is degraded
Apr 09 20:21:21.927 E ns/openshift-cluster-machine-approver pod/machine-approver-6c8896db89-7mf22 node/ip-10-0-137-160.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): ings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nW0409 20:15:22.410167       1 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest\nI0409 20:15:22.460764       1 main.go:147] CSR csr-ctmgz added\nI0409 20:15:22.460838       1 main.go:150] CSR csr-ctmgz is already approved\nI0409 20:15:22.460893       1 main.go:147] CSR csr-m7nmj added\nI0409 20:15:22.460922       1 main.go:150] CSR csr-m7nmj is already approved\nI0409 20:15:22.460951       1 main.go:147] CSR csr-r9vlj added\nI0409 20:15:22.460975       1 main.go:150] CSR csr-r9vlj is already approved\nI0409 20:15:22.461005       1 main.go:147] CSR csr-zkmv9 added\nI0409 20:15:22.461027       1 main.go:150] CSR csr-zkmv9 is already approved\nI0409 20:15:22.461053       1 main.go:147] CSR csr-97knb added\nI0409 20:15:22.461074       1 main.go:150] CSR csr-97knb is already approved\nI0409 20:15:22.463389       1 main.go:147] CSR csr-68htw added\nI0409 20:15:22.463443       1 main.go:150] CSR csr-68htw is already approved\nI0409 20:15:22.463478       1 main.go:147] CSR csr-lw466 added\nI0409 20:15:22.463501       1 main.go:150] CSR csr-lw466 is already approved\nI0409 20:15:22.463542       1 main.go:147] CSR csr-qx4bx added\nI0409 20:15:22.463565       1 main.go:150] CSR csr-qx4bx is already approved\nI0409 20:15:22.463594       1 main.go:147] CSR csr-vtxhs added\nI0409 20:15:22.463615       1 main.go:150] CSR csr-vtxhs is already approved\nI0409 20:15:22.463657       1 main.go:147] CSR csr-zqpsm added\nI0409 20:15:22.463680       1 main.go:150] CSR csr-zqpsm is already approved\nI0409 20:15:22.463706       1 main.go:147] CSR csr-zxsbb added\nI0409 20:15:22.464917       1 main.go:150] CSR csr-zxsbb is already approved\nI0409 20:15:22.464974       1 main.go:147] CSR csr-4r696 added\nI0409 20:15:22.465001       1 main.go:150] CSR csr-4r696 is already approved\n
Apr 09 20:21:23.784 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-59c7cf7589-4d57z node/ip-10-0-137-160.us-west-1.compute.internal container/cluster-storage-operator container exited with code 1 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-59c7cf7589-4d57z_849e8640-1271-4fa1-a0f4-6061753181d0/cluster-storage-operator/0.log": lstat /var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-59c7cf7589-4d57z_849e8640-1271-4fa1-a0f4-6061753181d0/cluster-storage-operator/0.log: no such file or directory
Apr 09 20:21:48.192 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-848976b9cf-j9cw2 node/ip-10-0-137-160.us-west-1.compute.internal container/pod-identity-webhook container exited with code 137 (Error): 
Apr 09 20:23:02.289 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus rules PrometheusRule failed: updating PrometheusRule object failed: Internal error occurred: failed calling webhook "prometheusrules.openshift.io": Post "https://prometheus-operator.openshift-monitoring.svc:8080/admission-prometheusrules/validate?timeout=5s": x509: certificate signed by unknown authority
Apr 09 20:23:43.312 E ns/openshift-monitoring pod/prometheus-adapter-d5bd6bc75-czlzv node/ip-10-0-187-40.us-west-1.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0409 20:11:02.846462       1 adapter.go:98] successfully using in-cluster auth\nI0409 20:11:03.666329       1 dynamic_cafile_content.go:167] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0409 20:11:03.666442       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0409 20:11:03.668809       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0409 20:11:03.669058       1 secure_serving.go:197] Serving securely on [::]:6443\nI0409 20:11:03.669238       1 tlsconfig.go:240] Starting DynamicServingCertificateController\n
Apr 09 20:23:43.339 E ns/openshift-marketplace pod/certified-operators-jnfwz node/ip-10-0-187-40.us-west-1.compute.internal container/registry-server container exited with code 2 (Error): 
Apr 09 20:23:44.416 E ns/openshift-monitoring pod/thanos-querier-cb79fb999-nx5df node/ip-10-0-187-40.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 2021/04/09 19:57:44 provider.go:120: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:57:44 provider.go:125: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2021/04/09 19:57:44 provider.go:314: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2021/04/09 19:57:44 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9090/"\n2021/04/09 19:57:44 oauthproxy.go:224: compiled skip-auth-regex => "^/metrics"\n2021/04/09 19:57:44 oauthproxy.go:230: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2021/04/09 19:57:44 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2021/04/09 19:57:44 main.go:156: using htpasswd file /etc/proxy/htpasswd/auth\nI0409 19:57:44.258861       1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2021/04/09 19:57:44 http.go:107: HTTPS: listening on [::]:9091\n