Result: SUCCESS
Tests: 4 failed / 26 succeeded
Started: 2020-07-10 10:47
Elapsed: 1h16m
Work namespace: ci-op-5884d4q8
Refs: master:af93d88b, 141:2d4a39d3
Pod: ab167fcd-c29a-11ea-a75e-0a580a80070b
Repo: openshift/insights-operator
Revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (34m45s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 31s of 30m0s (2%); this is currently sufficient to pass the test/job but is not considered completely correct:

Jul 10 11:39:08.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests over new connections
Jul 10 11:39:09.385 - 8s    E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service is not responding to GET requests over new connections
Jul 10 11:39:16.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests on reused connections
Jul 10 11:39:17.385 - 10s   E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service is not responding to GET requests on reused connections
Jul 10 11:39:18.433 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests over new connections
Jul 10 11:39:28.220 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests on reused connections
Jul 10 11:41:46.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests over new connections
Jul 10 11:41:46.447 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests over new connections
Jul 10 11:41:58.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests over new connections
Jul 10 11:41:59.385 - 6s    E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service is not responding to GET requests over new connections
Jul 10 11:42:05.965 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests over new connections
Jul 10 11:44:44.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests over new connections
Jul 10 11:44:44.431 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests over new connections
Jul 10 11:47:54.386 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests on reused connections
Jul 10 11:47:54.426 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests on reused connections
Jul 10 11:48:02.385 E ns/e2e-k8s-service-lb-available-8722 svc/service-test Service stopped responding to GET requests over new connections
Jul 10 11:48:02.427 I ns/e2e-k8s-service-lb-available-8722 svc/service-test Service started responding to GET requests over new connections
				from junit_upgrade_1594382094.xml
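The event stream above comes from polling the service two ways: over a fresh TCP connection per request ("new connections") and over a kept-alive connection ("reused connections"), with the reported 31s of downtime accumulated from the gaps between each "stopped responding" and the next "started responding" event. Below is a minimal sketch of that style of probe in Go; the URL, one-second interval, and downtime accounting are illustrative assumptions, not the suite's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues GETs against url every second and returns the total time the
// endpoint was unreachable during the window. Setting DisableKeepAlives forces
// a new TCP connection per request ("new connections"); leaving it false
// reuses one connection ("reused connections").
func probe(url string, window time.Duration, newConns bool) time.Duration {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: newConns},
	}
	var down time.Duration
	deadline := time.Now().Add(window)
	for time.Now().Before(deadline) {
		start := time.Now()
		resp, err := client.Get(url)
		if err != nil || resp.StatusCode != http.StatusOK {
			// Rough accounting: count the failed attempt plus the wait until the next poll.
			down += time.Since(start) + time.Second
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
	return down
}

func main() {
	// Hypothetical endpoint; the real test targets the load-balancer address of
	// svc/service-test in the e2e-k8s-service-lb-available namespace.
	url := "http://example.com/"
	window := 30 * time.Minute
	down := probe(url, window, true)
	fmt.Printf("unreachable for %s of %s (%.0f%%)\n",
		down.Round(time.Second), window, 100*down.Seconds()/window.Seconds())
}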



Cluster upgrade Kubernetes APIs remain available (34m15s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 4s of 34m15s (0%); this is currently sufficient to pass the test/job but is not considered completely correct:

Jul 10 11:16:15.263 E kube-apiserver Kube API started failing: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 18.235.146.98:6443: connect: connection refused
Jul 10 11:16:16.232 E kube-apiserver Kube API is not responding to GET requests
Jul 10 11:16:16.281 I kube-apiserver Kube API started responding to GET requests
Jul 10 11:42:54.310 E kube-apiserver Kube API started failing: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 18.235.146.98:6443: connect: connection refused
Jul 10 11:42:55.232 E kube-apiserver Kube API is not responding to GET requests
Jul 10 11:42:55.264 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1594382094.xml
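Both gaps here show up as dial errors against the external API endpoint ("connect: connection refused") rather than the client-side timeouts seen in the OpenShift API test below. A minimal sketch of how a probe might separate those two failure modes, relying on the standard library's error unwrapping; the URL and classification are illustrative assumptions, not the monitor's actual code.

package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

// checkAPI issues a single GET against the kube-apiserver and reports whether
// a failure was a refused connection (no backend answering on the port) or a
// timeout (the endpoint accepted the connection but did not respond in time).
func checkAPI(baseURL string) string {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		baseURL+"/api/v1/namespaces/kube-system?timeout=15s", nil)
	if err != nil {
		return "bad request: " + err.Error()
	}
	resp, err := http.DefaultClient.Do(req)
	switch {
	case errors.Is(err, syscall.ECONNREFUSED):
		return "unavailable: connection refused"
	case errors.Is(err, context.DeadlineExceeded):
		return "unavailable: timed out"
	case err != nil:
		return "unavailable: " + err.Error()
	}
	defer resp.Body.Close()
	return fmt.Sprintf("responding: HTTP %d", resp.StatusCode)
}

func main() {
	// Hypothetical endpoint; the real monitor targets the cluster's external API
	// URL (here api.ci-op-5884d4q8-35670...:6443) with cluster credentials.
	fmt.Println(checkAPI("https://127.0.0.1:6443"))
}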



Cluster upgrade OpenShift APIs remain available (34m15s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 54s of 34m15s (3%); this is currently sufficient to pass the test/job but is not considered completely correct:

Jul 10 11:41:59.234 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 10 11:42:00.234 - 54s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 10 11:42:54.603 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1594382094.xml
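Across the three availability tests, the reported percentage is the unreachable time as a share of the monitored window, rounded to the nearest whole percent: 31s of 30m0s is about 1.7% (reported 2%), 4s of 34m15s about 0.2% (reported 0%), and 54s of 34m15s about 2.6% (reported 3%). A short check of that arithmetic; the rounding rule is inferred from the numbers above, not taken from the suite's source.

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Disruption totals and monitored windows from the three summaries above.
	cases := []struct {
		name   string
		down   time.Duration
		window time.Duration
	}{
		{"service load balancer", 31 * time.Second, 30 * time.Minute},
		{"Kubernetes APIs", 4 * time.Second, 34*time.Minute + 15*time.Second},
		{"OpenShift APIs", 54 * time.Second, 34*time.Minute + 15*time.Second},
	}
	for _, c := range cases {
		pct := 100 * c.down.Seconds() / c.window.Seconds()
		fmt.Printf("%-22s %s of %s = %.2f%% (reported as %.0f%%)\n",
			c.name, c.down, c.window, pct, math.Round(pct))
	}
}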



openshift-tests Monitor cluster while tests execute (39m36s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
82 error level events were detected during this test run:

Jul 10 11:16:15.260 E kube-apiserver failed contacting the API: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=24180&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp 52.201.24.244:6443: connect: connection refused
Jul 10 11:18:24.227 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-227.ec2.internal node/ip-10-0-147-227.ec2.internal container/kube-controller-manager container exited with code 255 (Error): rue: dial tcp [::1]:6443: connect: connection refused\nE0710 11:18:23.020260       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consoleexternalloglinks?allowWatchBookmarks=true&resourceVersion=17874&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0710 11:18:23.021399       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/servicecas?allowWatchBookmarks=true&resourceVersion=17850&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0710 11:18:23.022305       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=17733&timeout=6m18s&timeoutSeconds=378&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0710 11:18:23.025627       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubeschedulers?allowWatchBookmarks=true&resourceVersion=21783&timeout=9m53s&timeoutSeconds=593&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0710 11:18:23.029044       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/consoles?allowWatchBookmarks=true&resourceVersion=19438&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0710 11:18:23.575653       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0710 11:18:23.575741       1 controllermanager.go:291] leaderelection lost\n
Jul 10 11:18:48.292 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-227.ec2.internal node/ip-10-0-147-227.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 10 11:19:41.205 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-cluster-version/cluster-version-operator" (5 of 602)
Jul 10 11:20:58.939 E ns/openshift-machine-api pod/machine-api-operator-67949b58b5-bk4s6 node/ip-10-0-167-50.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 10 11:21:15.489 E kube-apiserver Kube API started failing: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 10 11:21:47.063 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-253-163.ec2.internal node/ip-10-0-253-163.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ echo -n 'Waiting kube-apiserver to respond.'\nWaiting kube-apiserver to respond.+ tries=0\n+ curl --output /dev/null --silent -k https://localhost:6443/version\n+ echo\n\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0710 11:21:46.178063       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0710 11:21:46.179629       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0710 11:21:46.179630       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0710 11:21:46.180771       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 10 11:23:12.190 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-227.ec2.internal node/ip-10-0-147-227.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ echo -n 'Waiting kube-apiserver to respond.'\nWaiting kube-apiserver to respond.+ tries=0\n+ curl --output /dev/null --silent -k https://localhost:6443/version\n+ echo\n\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0710 11:23:11.344422       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0710 11:23:11.345961       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0710 11:23:11.346007       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0710 11:23:11.346954       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 10 11:23:14.497 E ns/openshift-machine-api pod/machine-api-controllers-67d4df4d95-cs2l4 node/ip-10-0-167-50.ec2.internal container/machineset-controller container exited with code 1 (Error): 
Jul 10 11:23:36.523 E ns/openshift-cluster-machine-approver pod/machine-approver-56df4f654b-hpzpc node/ip-10-0-253-163.ec2.internal container/machine-approver-controller container exited with code 2 (Error): 981&timeoutSeconds=361&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0710 11:22:42.364012       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=24451&timeoutSeconds=478&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0710 11:22:42.364084       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=24405&timeoutSeconds=323&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0710 11:22:43.364580       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=24451&timeoutSeconds=317&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0710 11:22:43.366003       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=24405&timeoutSeconds=507&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0710 11:22:47.563396       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: unknown (get clusteroperators.config.openshift.io)\nE0710 11:22:47.563506       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)\n
Jul 10 11:24:01.853 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-747d55c4b6-f7vh9 node/ip-10-0-165-11.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Jul 10 11:24:08.448 E ns/openshift-monitoring pod/node-exporter-lbk9w node/ip-10-0-167-50.ec2.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-10T11:06:44.258Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-10T11:06:44.259Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-10T11:06:44.259Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 10 11:24:20.851 E ns/openshift-monitoring pod/kube-state-metrics-5bf59f5976-8bmmw node/ip-10-0-165-11.ec2.internal container/kube-state-metrics container exited with code 2 (Error): 
Jul 10 11:24:36.754 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-167-50.ec2.internal node/ip-10-0-167-50.ec2.internal container/cluster-policy-controller container exited with code 255 (Error): + timeout 3m /bin/bash -exuo pipefail -c 'while [ -n "$(ss -Htanop \( sport = 10357 \))" ]; do sleep 1; done'\n++ ss -Htanop '(' sport = 10357 ')'\n/bin/bash: ss: command not found\n+ '[' -n '' ']'\n+ echo -n 'Waiting kube-apiserver to respond.'\nWaiting kube-apiserver to respond.+ tries=0\n+ curl --output /dev/null --silent -k https://localhost:6443/version\n+ echo\n\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml\nI0710 11:24:35.874116       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0710 11:24:35.875657       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0710 11:24:35.875682       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0710 11:24:35.876212       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 10 11:24:58.862 E ns/openshift-monitoring pod/thanos-querier-5d9f9889fb-j84j4 node/ip-10-0-241-91.ec2.internal container/oauth-proxy container exited with code 2 (Error): 2020/07/10 11:12:25 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/10 11:12:25 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/10 11:12:25 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/10 11:12:25 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/10 11:12:25 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/10 11:12:25 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/10 11:12:25 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/10 11:12:25 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\nI0710 11:12:25.596488       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/10 11:12:25 http.go:107: HTTPS: listening on [::]:9091\n2020/07/10 11:18:26 oauthproxy.go:782: basicauth: 10.128.0.10:57608 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/10 11:19:26 oauthproxy.go:782: basicauth: 10.128.0.10:59660 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/10 11:21:11 oauthproxy.go:782: basicauth: 10.130.0.50:35728 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/10 11:22:10 oauthproxy.go:782: basicauth: 10.130.0.50:43580 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/10 11:23:10 oauthproxy.go:782: basicauth: 10.130.0.50:46210 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/10 11:24:10 oauthproxy.go:782: basicauth: 10.130.0.50:49066 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 10 11:24:58.880 E ns/openshift-monitoring pod/telemeter-client-6b8b456cfc-kwfbt node/ip-10-0-241-91.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Jul 10 11:24:58.880 E ns/openshift-monitoring pod/telemeter-client-6b8b456cfc-kwfbt node/ip-10-0-241-91.ec2.internal container/reload container exited with code 2 (Error): 
Jul 10 11:25:05.030 E ns/openshift-monitoring pod/prometheus-adapter-798c656fbd-m7wnf node/ip-10-0-165-11.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0710 11:11:54.227076       1 adapter.go:94] successfully using in-cluster auth\nI0710 11:11:54.879931       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0710 11:11:54.879982       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0710 11:11:54.880301       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0710 11:11:54.881125       1 secure_serving.go:178] Serving securely on [::]:6443\nI0710 11:11:54.881325       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Jul 10 11:25:10.158 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-241-91.ec2.internal container/prometheus container exited with code 1 (Error): .0.0.1:9090\nlevel=info ts=2020-07-10T11:25:06.745Z caller=head.go:645 component=tsdb msg="Replaying WAL and on-disk memory mappable chunks if any, this may take a while"\nlevel=info ts=2020-07-10T11:25:06.745Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-10T11:25:06.745Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=287.219µs\nlevel=info ts=2020-07-10T11:25:06.746Z caller=main.go:694 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-10T11:25:06.746Z caller=main.go:695 msg="TSDB started"\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:547 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:561 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:583 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:557 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:543 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-10T11:25:06.747Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-10T11:25:06.747Z caller=main.go:577 msg="Scrape manager stopped"\nlevel=info ts=2020-07-10T11:25:06.747Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-10T11:25:06.747Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-10T11:25:06.748Z caller=main.go:749 msg="Notifier manager stopped"\nlevel=error ts=2020-07-10
Jul 10 11:25:17.220 E ns/openshift-marketplace pod/certified-operators-6cc9489b7c-mf2fl node/ip-10-0-241-91.ec2.internal container/certified-operators container exited with code 2 (Error): 
Jul 10 11:25:18.075 E ns/openshift-monitoring pod/grafana-847f549fbd-nn8x9 node/ip-10-0-150-236.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Jul 10 11:25:20.241 E ns/openshift-monitoring pod/node-exporter-wrpmc node/ip-10-0-241-91.ec2.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-10T11:10:57.846Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-10T11:10:57.846Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 10 11:25:26.115 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-165-11.ec2.internal container/prometheus container exited with code 1 (Error): .0.0.1:9090\nlevel=info ts=2020-07-10T11:25:22.780Z caller=head.go:645 component=tsdb msg="Replaying WAL and on-disk memory mappable chunks if any, this may take a while"\nlevel=info ts=2020-07-10T11:25:22.780Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-10T11:25:22.780Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=195.867µs\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:694 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:695 msg="TSDB started"\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:547 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:561 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:583 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:557 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:543 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-10T11:25:22.781Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-10T11:25:22.781Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-10T11:25:22.781Z caller=main.go:577 msg="Scrape manager stopped"\nlevel=info ts=2020-07-10T11:25:22.783Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-10T11:25:22.783Z caller=main.go:749 msg="Notifier manager stopped"\nlevel=error ts=2020-07-10
Jul 10 11:25:27.790 E ns/openshift-monitoring pod/node-exporter-2ng7n node/ip-10-0-147-227.ec2.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-10T11:06:39.201Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-10T11:06:39.202Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-10T11:06:39.202Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 10 11:25:30.811 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-5ff4cb959c-j9whm node/ip-10-0-147-227.ec2.internal container/pod-identity-webhook container exited with code 137 (Error): 
Jul 10 11:26:01.341 E ns/openshift-console pod/console-75668b7456-22z4d node/ip-10-0-253-163.ec2.internal container/console container exited with code 2 (Error): 2020-07-10T11:15:21Z cmd/main: cookies are secure!\n2020-07-10T11:15:21Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-10T11:15:31Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-10T11:15:41Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-10T11:15:51Z cmd/main: Binding to [::]:8443...\n2020-07-10T11:15:51Z cmd/main: using TLS\n
Jul 10 11:27:09.262 E ns/openshift-sdn pod/sdn-controller-8c7cw node/ip-10-0-167-50.ec2.internal container/sdn-controller container exited with code 2 (Error): I0710 11:00:57.204216       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0710 11:02:54.773938       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0710 11:06:08.974860       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 10 11:27:13.106 E ns/openshift-sdn pod/sdn-controller-6f4bc node/ip-10-0-147-227.ec2.internal container/sdn-controller container exited with code 2 (Error): I0710 11:00:52.771102       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0710 11:06:08.975926       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 10 11:27:23.586 E ns/openshift-multus pod/multus-m4vk7 node/ip-10-0-241-91.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jul 10 11:27:23.638 E ns/openshift-multus pod/multus-admission-controller-9zhzs node/ip-10-0-253-163.ec2.internal container/multus-admission-controller container exited with code 137 (Error): 
Jul 10 11:27:26.339 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-167-50.ec2.internal node/ip-10-0-167-50.ec2.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 10 11:28:14.595 E ns/openshift-multus pod/multus-fqvcl node/ip-10-0-165-11.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jul 10 11:28:24.628 E ns/openshift-sdn pod/ovs-8dwnz node/ip-10-0-165-11.ec2.internal container/openvswitch container exited with code 137 (Error): T11:27:23.862Z|00175|connmgr|INFO|br0<->unix#436: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:23.890Z|00176|connmgr|INFO|br0<->unix#440: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:23.914Z|00177|connmgr|INFO|br0<->unix#443: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:23.937Z|00178|connmgr|INFO|br0<->unix#446: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:23.960Z|00179|connmgr|INFO|br0<->unix#449: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:24.000Z|00180|connmgr|INFO|br0<->unix#452: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:24.026Z|00181|connmgr|INFO|br0<->unix#455: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:24.051Z|00182|connmgr|INFO|br0<->unix#458: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:24.080Z|00183|connmgr|INFO|br0<->unix#461: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:24.109Z|00184|connmgr|INFO|br0<->unix#464: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:24.131Z|00185|connmgr|INFO|br0<->unix#467: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:24.162Z|00186|connmgr|INFO|br0<->unix#470: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:24.198Z|00187|connmgr|INFO|br0<->unix#473: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:24.234Z|00188|connmgr|INFO|br0<->unix#476: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:24.267Z|00189|connmgr|INFO|br0<->unix#479: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:24.681Z|00190|connmgr|INFO|br0<->unix#482: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:27:24.709Z|00191|connmgr|INFO|br0<->unix#485: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-10T11:27:24.732Z|00192|bridge|INFO|bridge br0: deleted interface vethf8287af8 on port 3\n2020-07-10T11:27:26.904Z|00193|bridge|INFO|bridge br0: added interface veth3d3880fb on port 28\n2020-07-10T11:27:26.935Z|00194|connmgr|INFO|br0<->unix#488: 5 flow_mods in the last 0 s (5 adds)\n2020-07-10T11:27:26.973Z|00195|connmgr|INFO|br0<->unix#491: 2 flow_mods in the last 0 s (2 deletes)\n
Jul 10 11:29:02.794 E ns/openshift-multus pod/multus-fglhd node/ip-10-0-167-50.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jul 10 11:29:22.863 E ns/openshift-sdn pod/ovs-ld4rs node/ip-10-0-167-50.ec2.internal container/openvswitch container exited with code 137 (Error):  s (1 adds)\n2020-07-10T11:27:34.028Z|00480|connmgr|INFO|br0<->unix#1154: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:34.053Z|00481|connmgr|INFO|br0<->unix#1157: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:59.073Z|00482|connmgr|INFO|br0<->unix#1160: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-10T11:27:59.082Z|00099|jsonrpc|WARN|unix#1095: send error: Broken pipe\n2020-07-10T11:27:59.082Z|00100|reconnect|WARN|unix#1095: connection dropped (Broken pipe)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-07-10T11:27:59.131Z|00483|connmgr|INFO|br0<->unix#1163: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-10T11:27:59.162Z|00484|bridge|INFO|bridge br0: deleted interface veth0c26daa9 on port 3\n2020-07-10T11:28:10.020Z|00485|bridge|INFO|bridge br0: added interface veth78466c0e on port 78\n2020-07-10T11:28:10.053Z|00486|connmgr|INFO|br0<->unix#1169: 5 flow_mods in the last 0 s (5 adds)\n2020-07-10T11:28:10.090Z|00487|connmgr|INFO|br0<->unix#1172: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:28:27.915Z|00488|bridge|INFO|bridge br0: added interface veth65143ab0 on port 79\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-10T11:28:27.933Z|00101|jsonrpc|WARN|unix#1113: send error: Broken pipe\n2020-07-10T11:28:27.933Z|00102|reconnect|WARN|unix#1113: connection dropped (Broken pipe)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-07-10T11:28:27.957Z|00489|connmgr|INFO|br0<->unix#1175: 5 flow_mods in the last 0 s (5 adds)\n2020-07-10T11:28:28.010Z|00490|connmgr|INFO|br0<->unix#1179: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-07-10T11:28:28.016Z|00491|connmgr|INFO|br0<->unix#1181: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:28:32.726Z|00492|connmgr|INFO|br0<->unix#1184: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:28:32.752Z|00493|connmgr|INFO|br0<->unix#1187: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-10T11:28:32.774Z|00494|bridge|INFO|bridge br0: deleted interface veth65143ab0 on port 79\n
Jul 10 11:30:10.266 E ns/openshift-sdn pod/ovs-mfh52 node/ip-10-0-253-163.ec2.internal container/openvswitch container exited with code 137 (Error): |br0<->unix#1254: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:28:27.301Z|00533|connmgr|INFO|br0<->unix#1257: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.326Z|00534|connmgr|INFO|br0<->unix#1260: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:28:27.343Z|00535|connmgr|INFO|br0<->unix#1263: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.371Z|00536|connmgr|INFO|br0<->unix#1266: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:28:27.402Z|00537|connmgr|INFO|br0<->unix#1269: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.447Z|00538|connmgr|INFO|br0<->unix#1272: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:28:27.463Z|00539|connmgr|INFO|br0<->unix#1275: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.491Z|00540|connmgr|INFO|br0<->unix#1278: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:28:27.513Z|00541|connmgr|INFO|br0<->unix#1281: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.539Z|00542|connmgr|INFO|br0<->unix#1284: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:28:27.551Z|00543|connmgr|INFO|br0<->unix#1287: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.571Z|00544|connmgr|INFO|br0<->unix#1290: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:28:27.587Z|00545|connmgr|INFO|br0<->unix#1293: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.605Z|00546|connmgr|INFO|br0<->unix#1296: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:28:27.618Z|00547|connmgr|INFO|br0<->unix#1299: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.636Z|00548|connmgr|INFO|br0<->unix#1302: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:28:27.653Z|00549|connmgr|INFO|br0<->unix#1305: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.664Z|00550|connmgr|INFO|br0<->unix#1308: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:28:27.672Z|00551|connmgr|INFO|br0<->unix#1310: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:28:27.692Z|00552|connmgr|INFO|br0<->unix#1313: 1 flow_mods in the last 0 s (1 deletes)\n
Jul 10 11:30:23.814 E ns/openshift-multus pod/multus-nm5dq node/ip-10-0-150-236.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jul 10 11:31:02.710 E ns/openshift-sdn pod/ovs-lwngp node/ip-10-0-147-227.ec2.internal container/openvswitch container exited with code 137 (Error): st 0 s (3 adds)\n2020-07-10T11:27:03.216Z|00378|connmgr|INFO|br0<->unix#921: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:03.217Z|00379|connmgr|INFO|br0<->unix#923: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:03.253Z|00380|connmgr|INFO|br0<->unix#927: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:03.266Z|00381|connmgr|INFO|br0<->unix#930: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:03.309Z|00382|connmgr|INFO|br0<->unix#933: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:03.310Z|00383|connmgr|INFO|br0<->unix#935: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:03.343Z|00384|connmgr|INFO|br0<->unix#938: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:03.366Z|00385|connmgr|INFO|br0<->unix#941: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:03.387Z|00386|connmgr|INFO|br0<->unix#944: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:03.412Z|00387|connmgr|INFO|br0<->unix#947: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:03.443Z|00388|connmgr|INFO|br0<->unix#950: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:03.465Z|00389|connmgr|INFO|br0<->unix#953: 1 flow_mods in the last 0 s (1 adds)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-10T11:28:02.521Z|00051|jsonrpc|WARN|unix#964: send error: Broken pipe\n2020-07-10T11:28:02.521Z|00052|reconnect|WARN|unix#964: connection dropped (Broken pipe)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-07-10T11:28:43.952Z|00390|connmgr|INFO|br0<->unix#965: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:28:43.978Z|00391|connmgr|INFO|br0<->unix#968: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-10T11:28:43.999Z|00392|bridge|INFO|bridge br0: deleted interface vethf98206af on port 6\n2020-07-10T11:28:51.548Z|00393|bridge|INFO|bridge br0: added interface veth2f658120 on port 58\n2020-07-10T11:28:51.575Z|00394|connmgr|INFO|br0<->unix#971: 5 flow_mods in the last 0 s (5 adds)\n2020-07-10T11:28:51.629Z|00395|connmgr|INFO|br0<->unix#974: 2 flow_mods in the last 0 s (2 deletes)\n
Jul 10 11:31:11.779 E ns/openshift-multus pod/multus-hht2g node/ip-10-0-147-227.ec2.internal container/kube-multus container exited with code 137 (Error): 
Jul 10 11:31:54.050 E ns/openshift-sdn pod/ovs-wh6p4 node/ip-10-0-150-236.ec2.internal container/openvswitch container exited with code 137 (Error): -07-10T11:26:58.765Z|00149|connmgr|INFO|br0<->unix#366: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:26:58.794Z|00150|connmgr|INFO|br0<->unix#369: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-10T11:26:58.819Z|00151|bridge|INFO|bridge br0: deleted interface veth1a8ded28 on port 4\n2020-07-10T11:27:11.892Z|00152|bridge|INFO|bridge br0: added interface vethd055b6b7 on port 25\n2020-07-10T11:27:11.923Z|00153|connmgr|INFO|br0<->unix#375: 5 flow_mods in the last 0 s (5 adds)\n2020-07-10T11:27:11.962Z|00154|connmgr|INFO|br0<->unix#378: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-10T11:27:53.180Z|00155|connmgr|INFO|br0<->unix#388: 2 flow_mods in the last 0 s (2 adds)\n2020-07-10T11:27:53.237Z|00156|connmgr|INFO|br0<->unix#392: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:53.304Z|00157|connmgr|INFO|br0<->unix#404: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:53.331Z|00158|connmgr|INFO|br0<->unix#408: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:53.356Z|00159|connmgr|INFO|br0<->unix#411: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-10T11:27:53.456Z|00160|connmgr|INFO|br0<->unix#414: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:53.486Z|00161|connmgr|INFO|br0<->unix#417: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:53.510Z|00162|connmgr|INFO|br0<->unix#420: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:53.543Z|00163|connmgr|INFO|br0<->unix#423: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:53.569Z|00164|connmgr|INFO|br0<->unix#426: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:53.597Z|00165|connmgr|INFO|br0<->unix#429: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:53.622Z|00166|connmgr|INFO|br0<->unix#432: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:53.648Z|00167|connmgr|INFO|br0<->unix#435: 1 flow_mods in the last 0 s (1 adds)\n2020-07-10T11:27:53.673Z|00168|connmgr|INFO|br0<->unix#438: 3 flow_mods in the last 0 s (3 adds)\n2020-07-10T11:27:53.697Z|00169|connmgr|INFO|br0<->unix#441: 1 flow_mods in the last 0 s (1 adds)\n
Jul 10 11:35:16.425 E ns/openshift-machine-config-operator pod/machine-config-daemon-wttfv node/ip-10-0-147-227.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 10 11:35:21.263 E ns/openshift-machine-config-operator pod/machine-config-daemon-ldf2x node/ip-10-0-253-163.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 10 11:35:44.791 E ns/openshift-machine-config-operator pod/machine-config-daemon-ptx47 node/ip-10-0-241-91.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 10 11:35:52.587 E ns/openshift-machine-config-operator pod/machine-config-daemon-rprh4 node/ip-10-0-150-236.ec2.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 10 11:36:00.568 E ns/openshift-machine-config-operator pod/machine-config-controller-79676c7b8b-89b24 node/ip-10-0-147-227.ec2.internal container/machine-config-controller container exited with code 2 (Error): de_controller.go:445] Pool worker: node ip-10-0-165-11.ec2.internal is now reporting ready\nI0710 11:11:44.377040       1 node_controller.go:462] Pool worker: node ip-10-0-150-236.ec2.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-933d20437cd01ee08f7fdf6cab6bca8c\nI0710 11:11:44.377064       1 node_controller.go:462] Pool worker: node ip-10-0-150-236.ec2.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-933d20437cd01ee08f7fdf6cab6bca8c\nI0710 11:11:44.377069       1 node_controller.go:462] Pool worker: node ip-10-0-150-236.ec2.internal changed machineconfiguration.openshift.io/state = Done\nE0710 11:11:45.311982       1 render_controller.go:459] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0710 11:11:45.312006       1 render_controller.go:376] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0710 11:11:46.810133       1 node_controller.go:445] Pool worker: node ip-10-0-241-91.ec2.internal is now reporting ready\nI0710 11:11:54.320401       1 node_controller.go:445] Pool worker: node ip-10-0-150-236.ec2.internal is now reporting ready\nE0710 11:11:55.327776       1 render_controller.go:459] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\nI0710 11:11:55.327803       1 render_controller.go:376] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again\n
Jul 10 11:37:58.855 E ns/openshift-machine-config-operator pod/machine-config-server-6p5dq node/ip-10-0-147-227.ec2.internal container/machine-config-server container exited with code 2 (Error): I0710 11:01:50.783999       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-65-gf8ff1d10-dirty (f8ff1d10d55425cd33c367c0911efcdd371125c6)\nI0710 11:01:50.784940       1 api.go:69] Launching server on :22624\nI0710 11:01:50.784982       1 api.go:69] Launching server on :22623\nI0710 11:08:10.449664       1 api.go:116] Pool worker requested by address:"10.0.213.169:14690" User-Agent:"Ignition/0.35.1"\n
Jul 10 11:38:01.509 E ns/openshift-machine-config-operator pod/machine-config-server-5h29m node/ip-10-0-167-50.ec2.internal container/machine-config-server container exited with code 2 (Error): I0710 11:01:53.213222       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-65-gf8ff1d10-dirty (f8ff1d10d55425cd33c367c0911efcdd371125c6)\nI0710 11:01:53.213821       1 api.go:69] Launching server on :22624\nI0710 11:01:53.213855       1 api.go:69] Launching server on :22623\n
Jul 10 11:38:19.706 E ns/openshift-console-operator pod/console-operator-6d65747755-8tp4w node/ip-10-0-253-163.ec2.internal container/console-operator container exited with code 1 (Error): ctor.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.835667       1 reflector.go:181] Stopping reflector *v1.ConsoleCLIDownload (10m0s) from github.com/openshift/client-go/console/informers/externalversions/factory.go:101\nI0710 11:38:15.835758       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.835842       1 reflector.go:181] Stopping reflector *v1.Console (10m0s) from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0710 11:38:15.835912       1 reflector.go:181] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.835994       1 reflector.go:181] Stopping reflector *v1.Proxy (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0710 11:38:15.836028       1 reflector.go:181] Stopping reflector *v1.Route (10m0s) from github.com/openshift/client-go/route/informers/externalversions/factory.go:101\nI0710 11:38:15.836048       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.836065       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.836081       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.836097       1 reflector.go:181] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.836107       1 builder.go:258] server exited\nI0710 11:38:15.836160       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nI0710 11:38:15.836167       1 reflector.go:181] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:135\nW0710 11:38:15.836231       1 builder.go:96] graceful termination failed, controllers failed with error: stopped\n
Jul 10 11:38:30.242 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-236.ec2.internal container/prometheus container exited with code 1 (Error): .0.0.1:9090\nlevel=info ts=2020-07-10T11:38:26.965Z caller=head.go:645 component=tsdb msg="Replaying WAL and on-disk memory mappable chunks if any, this may take a while"\nlevel=info ts=2020-07-10T11:38:26.965Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-10T11:38:26.966Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=179.631µs\nlevel=info ts=2020-07-10T11:38:26.966Z caller=main.go:694 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-10T11:38:26.966Z caller=main.go:695 msg="TSDB started"\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:547 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:561 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:583 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:557 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-10T11:38:26.967Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-10T11:38:26.967Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:543 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-10T11:38:26.967Z caller=main.go:577 msg="Scrape manager stopped"\nlevel=info ts=2020-07-10T11:38:26.969Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-10T11:38:26.969Z caller=main.go:749 msg="Notifier manager stopped"\nlevel=error ts=2020-07-10
Jul 10 11:38:46.130 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-6c478dbdd6-bw6nk node/ip-10-0-253-163.ec2.internal container/pod-identity-webhook container exited with code 137 (Error): 
Jul 10 11:39:18.488 - 45s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 10 11:40:07.815 E kube-apiserver failed contacting the API: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=47722&timeout=5m7s&timeoutSeconds=307&watch=true: dial tcp 52.201.24.244:6443: connect: connection refused
Jul 10 11:40:09.064 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 10 11:40:18.115 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Jul 10 11:40:36.675 E ns/openshift-monitoring pod/grafana-58d9cbf6d6-bx9g4 node/ip-10-0-150-236.ec2.internal container/grafana-proxy container exited with code 2 (Error): 
Jul 10 11:40:36.705 E ns/openshift-monitoring pod/prometheus-adapter-67c47cdbf9-vhj8l node/ip-10-0-150-236.ec2.internal container/prometheus-adapter container exited with code 2 (Error): I0710 11:24:58.965116       1 adapter.go:94] successfully using in-cluster auth\nI0710 11:24:59.755561       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0710 11:24:59.755564       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0710 11:24:59.755961       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0710 11:24:59.757123       1 secure_serving.go:178] Serving securely on [::]:6443\nI0710 11:24:59.757772       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0710 11:27:40.656665       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0710 11:27:40.656812       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Jul 10 11:40:37.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-236.ec2.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/10 11:38:28 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 10 11:40:37.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-236.ec2.internal container/prometheus-proxy container exited with code 2 (Error): 2020/07/10 11:38:28 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/10 11:38:28 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/10 11:38:28 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/10 11:38:28 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/10 11:38:28 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/10 11:38:28 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/10 11:38:28 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/10 11:38:28 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/10 11:38:28 http.go:107: HTTPS: listening on [::]:9091\nI0710 11:38:28.756110       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 10 11:40:37.693 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-150-236.ec2.internal container/prometheus-config-reloader container exited with code 2 (Error):
  ts=2020-07-10T11:38:27.816205848Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."
  level=error ts=2020-07-10T11:38:27.818583562Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"
  level=info ts=2020-07-10T11:38:33.057015888Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=
  level=info ts=2020-07-10T11:38:33.057124704Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=
Jul 10 11:40:37.758 E ns/openshift-monitoring pod/thanos-querier-f67c4b969-kqgsx node/ip-10-0-150-236.ec2.internal container/oauth-proxy container exited with code 2 (Error):
  2020/07/10 11:38:13 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier
  2020/07/10 11:38:13 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
  2020/07/10 11:38:13 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
  2020/07/10 11:38:13 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"
  2020/07/10 11:38:13 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"
  2020/07/10 11:38:13 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier
  2020/07/10 11:38:13 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
  2020/07/10 11:38:13 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth
  2020/07/10 11:38:13 http.go:107: HTTPS: listening on [::]:9091
  I0710 11:38:13.253863       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
Jul 10 11:40:55.820 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-241-91.ec2.internal container/prometheus container exited with code 1 (Error):
  .0.0.1:9090
  level=info ts=2020-07-10T11:40:54.305Z caller=head.go:645 component=tsdb msg="Replaying WAL and on-disk memory mappable chunks if any, this may take a while"
  level=info ts=2020-07-10T11:40:54.306Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
  level=info ts=2020-07-10T11:40:54.306Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=1.131944ms
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:694 fs_type=XFS_SUPER_MAGIC
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:695 msg="TSDB started"
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:547 msg="Stopping scrape discovery manager..."
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:561 msg="Stopping notify discovery manager..."
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:583 msg="Stopping scrape manager..."
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:557 msg="Notify discovery manager stopped"
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:543 msg="Scrape discovery manager stopped"
  level=info ts=2020-07-10T11:40:54.308Z caller=main.go:577 msg="Scrape manager stopped"
  level=info ts=2020-07-10T11:40:54.308Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."
  level=info ts=2020-07-10T11:40:54.308Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"
  level=info ts=2020-07-10T11:40:54.312Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
  level=info ts=2020-07-10T11:40:54.312Z caller=main.go:749 msg="Notifier manager stopped"
  level=error ts=2020-07-10
Jul 10 11:41:11.405 E ns/openshift-machine-api pod/machine-api-operator-6db6fd68c5-7bw9n node/ip-10-0-147-227.ec2.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 10 11:41:19.831 E ns/e2e-k8s-service-lb-available-8722 pod/service-test-njlb5 node/ip-10-0-150-236.ec2.internal container/netexec container exited with code 2 (Error): 
Jul 10 11:41:29.485 E ns/openshift-console pod/console-f996df5d4-7vtq7 node/ip-10-0-147-227.ec2.internal container/console container exited with code 2 (Error):
  2020-07-10T11:25:34Z cmd/main: cookies are secure!
  2020-07-10T11:25:34Z cmd/main: Binding to [::]:8443...
  2020-07-10T11:25:34Z cmd/main: using TLS
Jul 10 11:42:03.489 - 44s   E openshift-apiserver OpenShift API is not responding to GET requests
Jul 10 11:42:50.602 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 10 11:43:21.730 E ns/openshift-monitoring pod/thanos-querier-f67c4b969-2d68c node/ip-10-0-165-11.ec2.internal container/oauth-proxy container exited with code 2 (Error):
  0 11:24:29 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
  2020/07/10 11:24:29 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth
  I0710 11:24:29.573500       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
  2020/07/10 11:24:29 http.go:107: HTTPS: listening on [::]:9091
  2020/07/10 11:26:10 oauthproxy.go:782: basicauth: 10.130.0.50:53686 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:28:10 oauthproxy.go:782: basicauth: 10.130.0.50:33966 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:29:10 oauthproxy.go:782: basicauth: 10.130.0.50:36540 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:30:10 oauthproxy.go:782: basicauth: 10.130.0.50:38984 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:31:10 oauthproxy.go:782: basicauth: 10.130.0.50:41436 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:33:10 oauthproxy.go:782: basicauth: 10.130.0.50:46324 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:34:10 oauthproxy.go:782: basicauth: 10.130.0.50:48822 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:38:10 oauthproxy.go:782: basicauth: 10.130.0.50:58862 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:39:15 oauthproxy.go:782: basicauth: 10.130.0.50:36224 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:40:12 oauthproxy.go:782: basicauth: 10.130.0.50:43026 Authorization header does not start with 'Basic', skipping basic authentication
  2020/07/10 11:41:10 oauthproxy.go:782: basicauth: 10.130.0.50:46856 Authorization header does not start with 'Basic', skipping basic authentication
Jul 10 11:43:22.831 E ns/openshift-monitoring pod/telemeter-client-64d9ffc688-xkb4d node/ip-10-0-165-11.ec2.internal container/telemeter-client container exited with code 2 (Error): 
Jul 10 11:43:22.831 E ns/openshift-monitoring pod/telemeter-client-64d9ffc688-xkb4d node/ip-10-0-165-11.ec2.internal container/reload container exited with code 2 (Error): 
Jul 10 11:43:22.935 E ns/openshift-marketplace pod/redhat-operators-565654d6b4-lzhbx node/ip-10-0-165-11.ec2.internal container/redhat-operators container exited with code 2 (Error): 
Jul 10 11:43:23.884 E ns/openshift-marketplace pod/certified-operators-85f79b46c4-pntfq node/ip-10-0-165-11.ec2.internal container/certified-operators container exited with code 2 (Error): 
Jul 10 11:43:23.926 E ns/openshift-monitoring pod/prometheus-adapter-67c47cdbf9-dwhx4 node/ip-10-0-165-11.ec2.internal container/prometheus-adapter container exited with code 2 (Error):
  I0710 11:38:14.023848       1 adapter.go:94] successfully using in-cluster auth
  I0710 11:38:14.609714       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file
  I0710 11:38:14.609761       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file
  I0710 11:38:14.612618       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
  I0710 11:38:14.614161       1 secure_serving.go:178] Serving securely on [::]:6443
  I0710 11:38:14.614719       1 tlsconfig.go:219] Starting DynamicServingCertificateController
Jul 10 11:43:23.963 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-165-11.ec2.internal container/config-reloader container exited with code 2 (Error):
  2020/07/10 11:38:26 Watching directory: "/etc/alertmanager/config"
Jul 10 11:43:23.963 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-165-11.ec2.internal container/alertmanager-proxy container exited with code 2 (Error):
  2020/07/10 11:38:27 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main
  2020/07/10 11:38:27 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
  2020/07/10 11:38:27 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
  2020/07/10 11:38:27 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"
  2020/07/10 11:38:27 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"
  2020/07/10 11:38:27 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main
  2020/07/10 11:38:27 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
  2020/07/10 11:38:27 http.go:107: HTTPS: listening on [::]:9095
  I0710 11:38:27.219453       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key
Jul 10 11:43:37.185 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5765469575-t9dz8 node/ip-10-0-241-91.ec2.internal container/snapshot-controller container exited with code 2 (Error): 
Jul 10 11:43:39.456 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-150-236.ec2.internal container/prometheus container exited with code 1 (Error):
  hunks if any, this may take a while"
  level=info ts=2020-07-10T11:43:36.108Z caller=web.go:524 component=web msg="Start listening for connections" address=127.0.0.1:9090
  level=info ts=2020-07-10T11:43:36.119Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
  level=info ts=2020-07-10T11:43:36.119Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=11.610983ms
  level=info ts=2020-07-10T11:43:36.122Z caller=main.go:694 fs_type=XFS_SUPER_MAGIC
  level=info ts=2020-07-10T11:43:36.122Z caller=main.go:695 msg="TSDB started"
  level=info ts=2020-07-10T11:43:36.122Z caller=main.go:799 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
  level=info ts=2020-07-10T11:43:36.122Z caller=main.go:547 msg="Stopping scrape discovery manager..."
  level=info ts=2020-07-10T11:43:36.122Z caller=main.go:561 msg="Stopping notify discovery manager..."
  level=info ts=2020-07-10T11:43:36.123Z caller=main.go:583 msg="Stopping scrape manager..."
  level=info ts=2020-07-10T11:43:36.123Z caller=main.go:557 msg="Notify discovery manager stopped"
  level=info ts=2020-07-10T11:43:36.123Z caller=main.go:543 msg="Scrape discovery manager stopped"
  level=info ts=2020-07-10T11:43:36.123Z caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."
  level=info ts=2020-07-10T11:43:36.123Z caller=manager.go:892 component="rule manager" msg="Rule manager stopped"
  level=info ts=2020-07-10T11:43:36.123Z caller=main.go:577 msg="Scrape manager stopped"
  level=info ts=2020-07-10T11:43:36.131Z caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
  level=info ts=2020-07-10T11:43:36.131Z caller=main.go:749 msg="Notifier manager stopped"
  level=error ts=2020-07-10
Jul 10 11:43:44.579 E ns/openshift-cluster-machine-approver pod/machine-approver-75658f6ccd-g75rv node/ip-10-0-167-50.ec2.internal container/machine-approver-controller container exited with code 2 (Error):
  ] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:239
  I0710 11:38:25.798083       1 status.go:97] Starting cluster operator status controller
  I0710 11:38:25.798282       1 reflector.go:175] Starting reflector *v1.ClusterOperator (0s) from github.com/openshift/cluster-machine-approver/status.go:99
  I0710 11:38:25.893411       1 main.go:147] CSR csr-f9mll added
  I0710 11:38:25.893435       1 main.go:150] CSR csr-f9mll is already approved
  I0710 11:38:25.893450       1 main.go:147] CSR csr-kgmsp added
  I0710 11:38:25.893457       1 main.go:150] CSR csr-kgmsp is already approved
  I0710 11:38:25.893468       1 main.go:147] CSR csr-kqp6j added
  I0710 11:38:25.893475       1 main.go:150] CSR csr-kqp6j is already approved
  I0710 11:38:25.893484       1 main.go:147] CSR csr-nbs6q added
  I0710 11:38:25.893490       1 main.go:150] CSR csr-nbs6q is already approved
  I0710 11:38:25.893499       1 main.go:147] CSR csr-s5d57 added
  I0710 11:38:25.893506       1 main.go:150] CSR csr-s5d57 is already approved
  I0710 11:38:25.893514       1 main.go:147] CSR csr-tpcmn added
  I0710 11:38:25.893520       1 main.go:150] CSR csr-tpcmn is already approved
  I0710 11:38:25.893528       1 main.go:147] CSR csr-4wb6c added
  I0710 11:38:25.893534       1 main.go:150] CSR csr-4wb6c is already approved
  I0710 11:38:25.893543       1 main.go:147] CSR csr-9flgm added
  I0710 11:38:25.893548       1 main.go:150] CSR csr-9flgm is already approved
  I0710 11:38:25.893559       1 main.go:147] CSR csr-kj95x added
  I0710 11:38:25.893565       1 main.go:150] CSR csr-kj95x is already approved
  I0710 11:38:25.893575       1 main.go:147] CSR csr-rbzrd added
  I0710 11:38:25.893592       1 main.go:150] CSR csr-rbzrd is already approved
  I0710 11:38:25.893606       1 main.go:147] CSR csr-thw89 added
  I0710 11:38:25.893613       1 main.go:150] CSR csr-thw89 is already approved
  I0710 11:38:25.893623       1 main.go:147] CSR csr-x66xs added
  I0710 11:38:25.893671       1 main.go:150] CSR csr-x66xs is already approved
Jul 10 11:43:52.541 E ns/openshift-machine-config-operator pod/machine-config-operator-c8fb9c798-wghzr node/ip-10-0-167-50.ec2.internal container/machine-config-operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-machine-config-operator_machine-config-operator-c8fb9c798-wghzr_6b2c3b6e-dd2b-461b-9e29-984d42b54d5f/machine-config-operator/0.log": lstat /var/log/pods/openshift-machine-config-operator_machine-config-operator-c8fb9c798-wghzr_6b2c3b6e-dd2b-461b-9e29-984d42b54d5f/machine-config-operator/0.log: no such file or directory
Jul 10 11:44:14.106 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-6c478dbdd6-vvxln node/ip-10-0-167-50.ec2.internal container/pod-identity-webhook container exited with code 137 (Error): 
Jul 10 11:44:37.489 E kube-apiserver Kube API started failing: Get https://api.ci-op-5884d4q8-35670.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Jul 10 11:44:44.185 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus rules PrometheusRule failed: updating PrometheusRule object failed: Internal error occurred: failed calling webhook "prometheusrules.openshift.io": Post https://prometheus-operator.openshift-monitoring.svc:8080/admission-prometheusrules/validate?timeout=5s: no endpoints available for service "prometheus-operator"
Jul 10 11:45:09.863 E ns/openshift-marketplace pod/redhat-operators-565654d6b4-np9lj node/ip-10-0-150-236.ec2.internal container/redhat-operators container exited with code 2 (Error):