Result: SUCCESS
Tests: 2 failed / 27 succeeded
Started: 2020-07-30 16:24
Elapsed: 3h17m
Work namespace: ci-op-yw8l6h6h
Refs: master:0708acb1, 311:7d36af84
pod: 15f84e28-d281-11ea-bc8d-0a580a83021c
repo: openshift/cluster-authentication-operator
revision: 1

Test Failures


Cluster upgrade [sig-api-machinery] Kubernetes APIs remain available (45m53s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-api\-machinery\]\sKubernetes\sAPIs\sremain\savailable$'
API "openshift-api-available" was unreachable during disruption for at least 2s of 45m52s (0%), this is currently sufficient to pass the test/job but not considered completely correct:

Jul 30 19:17:27.387 E kube-apiserver Kube API started failing: Get https://api.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 54.177.66.153:6443: connect: connection refused
Jul 30 19:17:28.214 E kube-apiserver Kube API is not responding to GET requests
Jul 30 19:17:28.287 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1596137514.xml
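
The check above amounts to repeatedly issuing a GET against the kube-apiserver (the same /api/v1/namespaces/kube-system?timeout=15s URL that appears in the events) and recording the windows in which the request fails outright. The sketch below only illustrates that kind of poller; it is not the e2e monitor's code. The host is a placeholder, TLS verification is skipped to keep it self-contained, and the real monitor authenticates with the job's kubeconfig and aggregates outage windows into the junit result instead of printing each probe.

// Illustrative availability poller (assumed, simplified): probe a read-only
// apiserver endpoint on an interval and log probes that fail at the TCP/TLS
// layer, e.g. "dial tcp ...:6443: connect: connection refused" during a rollout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder host; the job above targets api.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443.
	apiURL := "https://api.example.cluster:6443/api/v1/namespaces/kube-system?timeout=15s"

	client := &http.Client{
		Timeout: 15 * time.Second,
		// The real monitor trusts the cluster CA and sends credentials; this sketch skips both.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Get(apiURL)
		if err != nil {
			fmt.Printf("%s E Kube API started failing: %v\n", time.Now().Format(time.StampMilli), err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s I Kube API responded: HTTP %d\n", time.Now().Format(time.StampMilli), resp.StatusCode)
	}
}

An unauthenticated probe like this would get HTTP 401/403 rather than the namespace object, but for disruption accounting the interesting transitions are the connection-level failures and recoveries, such as the 19:17:27-19:17:28 window above.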



openshift-tests [sig-arch] Monitor cluster while tests execute (51m19s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\s\[sig\-arch\]\sMonitor\scluster\swhile\stests\sexecute$'
125 error level events were detected during this test run:

Jul 30 18:40:48.681 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-105.us-west-1.compute.internal node/ip-10-0-147-105.us-west-1.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 18:44:13.159 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-195-252.us-west-1.compute.internal node/ip-10-0-195-252.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): , 0x7ba, 0x203000, 0x0, 0x7b5)\n	/usr/local/go/src/net/fd_unix.go:202 +0x4f\nnet.(*conn).Read(0xc000464a98, 0xc000368800, 0x7ba, 0x7ba, 0x0, 0x0, 0x0)\n	/usr/local/go/src/net/net.go:184 +0x68\ncrypto/tls.(*atLeastReader).Read(0xc0019c0020, 0xc000368800, 0x7ba, 0x7ba, 0x23, 0x1f8dae0, 0xc000c1ca78)\n	/usr/local/go/src/crypto/tls/conn.go:780 +0x60\nbytes.(*Buffer).ReadFrom(0xc001800cd8, 0x1f8d8e0, 0xc0019c0020, 0x40a345, 0x1a52800, 0x1c42d00)\n	/usr/local/go/src/bytes/buffer.go:204 +0xb4\ncrypto/tls.(*Conn).readFromUntil(0xc001800a80, 0x1f90200, 0xc000464a98, 0x5, 0xc000464a98, 0xd)\n	/usr/local/go/src/crypto/tls/conn.go:802 +0xec\ncrypto/tls.(*Conn).readRecordOrCCS(0xc001800a80, 0x0, 0x0, 0x0)\n	/usr/local/go/src/crypto/tls/conn.go:609 +0x124\ncrypto/tls.(*Conn).readRecord(...)\n	/usr/local/go/src/crypto/tls/conn.go:577\ncrypto/tls.(*Conn).Read(0xc001800a80, 0xc000d6c2d8, 0x9, 0x9, 0x0, 0x0, 0x0)\n	/usr/local/go/src/crypto/tls/conn.go:1255 +0x161\nio.ReadAtLeast(0x7f2d710401c0, 0xc001800a80, 0xc000d6c2d8, 0x9, 0x9, 0x9, 0x101df46137c01, 0x34ace931108, 0x1)\n	/usr/local/go/src/io/io.go:310 +0x87\nio.ReadFull(...)\n	/usr/local/go/src/io/io.go:329\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000d6c2d8, 0x9, 0x9, 0x7f2d710401c0, 0xc001800a80, 0x0, 0xc000000000, 0xc000c1cf28, 0xc001ccb3e0)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x87\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000d6c2a0, 0xc000c1cee0, 0x2, 0x0, 0x1)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa1\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).readFrames(0xc001186480)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:745 +0xa4\ncreated by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).serve\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:850 +0x347\n
Jul 30 18:44:39.268 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-195-252.us-west-1.compute.internal node/ip-10-0-195-252.us-west-1.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 18:44:48.306 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-195-252.us-west-1.compute.internal node/ip-10-0-195-252.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): e&resourceVersion=34951&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:44:46.697733       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.LimitRange: Get https://localhost:6443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=23010&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:44:46.698784       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=36672&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:44:46.706019       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=36686&timeout=8m6s&timeoutSeconds=486&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:44:46.707076       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=36681&timeout=5m27s&timeoutSeconds=327&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:44:46.708223       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=32948&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0730 18:44:47.288136       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0730 18:44:47.288199       1 policy_controller.go:94] leaderelection lost\nI0730 18:44:47.292198       1 resource_quota_controller.go:291] Shutting down resource quota controller\nI0730 18:44:47.294022       1 resource_quota_controller.go:260] resource quota controller worker shutting down\n
Jul 30 18:46:34.574 E ns/openshift-machine-api pod/machine-api-operator-7b4f9dd94f-d4g5m node/ip-10-0-161-20.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 30 18:46:47.178 E kube-apiserver Kube API started failing: Get https://api.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 30 18:48:35.240 E clusteroperator/kube-apiserver changed Degraded to True: NodeInstaller_InstallerPodFailed: NodeInstallerDegraded: 1 nodes are failing on revision 8:\nNodeInstallerDegraded: 
Jul 30 18:49:20.360 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Jul 30 18:49:53.061 E ns/openshift-cluster-machine-approver pod/machine-approver-75856c699d-g9tlx node/ip-10-0-195-252.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): uests.certificates.k8s.io)\nI0730 18:48:15.733882       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 18:48:15.733907       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0730 18:48:15.734643       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=39361&timeoutSeconds=484&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0730 18:48:15.734745       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=40650&timeoutSeconds=376&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0730 18:48:16.737847       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=39361&timeoutSeconds=588&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0730 18:48:16.738004       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=40650&timeoutSeconds=356&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0730 18:48:21.883331       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:99: Failed to watch *v1.ClusterOperator: the server is currently unable to handle the request (get clusteroperators.config.openshift.io)\n
Jul 30 18:49:54.672 E ns/openshift-kube-storage-version-migrator pod/migrator-86c78c9d97-pqxtj node/ip-10-0-179-241.us-west-1.compute.internal container/migrator container exited with code 2 (Error): I0730 18:35:13.513607       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0730 18:35:13.513732       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0730 18:35:13.513741       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0730 18:35:13.513752       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0730 18:35:13.513762       1 migrator.go:18] FLAG: --kubeconfig=""\nI0730 18:35:13.513770       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0730 18:35:13.513782       1 migrator.go:18] FLAG: --log_dir=""\nI0730 18:35:13.513790       1 migrator.go:18] FLAG: --log_file=""\nI0730 18:35:13.513798       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0730 18:35:13.513805       1 migrator.go:18] FLAG: --logtostderr="true"\nI0730 18:35:13.513812       1 migrator.go:18] FLAG: --skip_headers="false"\nI0730 18:35:13.513820       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0730 18:35:13.513827       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0730 18:35:13.513835       1 migrator.go:18] FLAG: --v="2"\nI0730 18:35:13.513842       1 migrator.go:18] FLAG: --vmodule=""\nI0730 18:35:13.515577       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\n
Jul 30 18:50:00.445 E ns/openshift-insights pod/insights-operator-d5d95b5b6-brwtx node/ip-10-0-161-20.us-west-1.compute.internal container/operator container exited with code 2 (Error): r.go:312] Found files to send: [/var/lib/insights-operator/insights-2020-07-30-184757.tar.gz]\nI0730 18:48:08.988875       1 insightsuploader.go:131] Uploading latest report since 2020-07-30T18:28:53Z\nI0730 18:48:08.998459       1 insightsclient.go:164] Uploading application/vnd.redhat.openshift.periodic to https://cloud.redhat.com/api/ingress/v1/upload\nI0730 18:48:09.715163       1 insightsclient.go:214] Successfully reported id=2020-07-30T18:48:08Z x-rh-insights-request-id=1a30a032569742bab864e9415f2736d9, wrote=68508\nI0730 18:48:09.715198       1 insightsuploader.go:159] Uploaded report successfully in 726.324224ms\nI0730 18:48:09.720815       1 status.go:320] The operator is healthy\nI0730 18:48:23.100595       1 httplog.go:90] GET /metrics: (6.749119ms) 200 [Prometheus/2.20.0 10.131.0.21:40170]\nI0730 18:48:29.336629       1 httplog.go:90] GET /metrics: (1.766083ms) 200 [Prometheus/2.20.0 10.128.2.6:34144]\nI0730 18:48:38.970825       1 status.go:320] The operator is healthy\nI0730 18:48:38.970879       1 status.go:430] No status update necessary, objects are identical\nI0730 18:48:38.984756       1 configobserver.go:68] Refreshing configuration from cluster pull secret\nI0730 18:48:38.989130       1 configobserver.go:93] Found cloud.openshift.com token\nI0730 18:48:38.989156       1 configobserver.go:110] Refreshing configuration from cluster secret\nI0730 18:48:53.099663       1 httplog.go:90] GET /metrics: (5.556519ms) 200 [Prometheus/2.20.0 10.131.0.21:40170]\nI0730 18:48:59.336318       1 httplog.go:90] GET /metrics: (1.587463ms) 200 [Prometheus/2.20.0 10.128.2.6:34144]\nI0730 18:49:23.099636       1 httplog.go:90] GET /metrics: (5.610471ms) 200 [Prometheus/2.20.0 10.131.0.21:40170]\nI0730 18:49:29.336221       1 httplog.go:90] GET /metrics: (1.507812ms) 200 [Prometheus/2.20.0 10.128.2.6:34144]\nI0730 18:49:53.099638       1 httplog.go:90] GET /metrics: (5.793353ms) 200 [Prometheus/2.20.0 10.131.0.21:40170]\nI0730 18:49:59.336277       1 httplog.go:90] GET /metrics: (1.5651ms) 200 [Prometheus/2.20.0 10.128.2.6:34144]\n
Jul 30 18:50:08.234 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-79c85c4c5b-v4t45 node/ip-10-0-195-252.us-west-1.compute.internal container/operator container exited with code 1 (Error): 0/tools/cache/reflector.go:125\nI0730 18:50:06.974412       1 base_controller.go:101] Shutting down ResourceSyncController ...\nI0730 18:50:06.974447       1 base_controller.go:58] Shutting down worker of ResourceSyncController controller ...\nI0730 18:50:06.977972       1 base_controller.go:48] All ResourceSyncController workers have been terminated\nI0730 18:50:06.974707       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0730 18:50:06.974713       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0730 18:50:06.974868       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0730 18:50:06.974879       1 base_controller.go:101] Shutting down UserCAObservationController ...\nI0730 18:50:06.974889       1 base_controller.go:101] Shutting down StatusSyncer_openshift-controller-manager ...\nI0730 18:50:06.974908       1 base_controller.go:58] Shutting down worker of UserCAObservationController controller ...\nI0730 18:50:06.978365       1 base_controller.go:48] All UserCAObservationController workers have been terminated\nI0730 18:50:06.974919       1 base_controller.go:58] Shutting down worker of StatusSyncer_openshift-controller-manager controller ...\nI0730 18:50:06.978412       1 base_controller.go:48] All StatusSyncer_openshift-controller-manager workers have been terminated\nI0730 18:50:06.974934       1 base_controller.go:58] Shutting down worker of ConfigObserver controller ...\nI0730 18:50:06.978453       1 base_controller.go:48] All ConfigObserver workers have been terminated\nI0730 18:50:06.974956       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0730 18:50:06.974971       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nW0730 18:50:06.975088       1 builder.go:88] graceful termination failed, controllers failed with error: stopped\n
Jul 30 18:50:09.670 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Could not update deployment "openshift-authentication-operator/authentication-operator" (205 of 606)\n* Could not update deployment "openshift-cluster-storage-operator/csi-snapshot-controller-operator" (245 of 606)\n* Could not update deployment "openshift-console/downloads" (396 of 606)\n* Could not update deployment "openshift-controller-manager-operator/openshift-controller-manager-operator" (309 of 606)\n* Could not update deployment "openshift-image-registry/cluster-image-registry-operator" (262 of 606)\n* Could not update deployment "openshift-marketplace/marketplace-operator" (458 of 606)\n* Could not update deployment "openshift-monitoring/cluster-monitoring-operator" (365 of 606)\n* Could not update deployment "openshift-operator-lifecycle-manager/olm-operator" (438 of 606)\n* Could not update deployment "openshift-service-ca-operator/service-ca-operator" (471 of 606)
Jul 30 18:50:09.734 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-774ddd4558-sn547 node/ip-10-0-179-241.us-west-1.compute.internal container/operator container exited with code 1 (Error): 8] Finished syncing operator at 33.535762ms\nI0730 18:49:58.146199       1 operator.go:146] Starting syncing operator at 2020-07-30 18:49:58.146192222 +0000 UTC m=+884.542435351\nI0730 18:49:58.484555       1 operator.go:148] Finished syncing operator at 338.353916ms\nI0730 18:50:00.466629       1 operator.go:146] Starting syncing operator at 2020-07-30 18:50:00.466613883 +0000 UTC m=+886.862856875\nI0730 18:50:00.493087       1 operator.go:148] Finished syncing operator at 26.464368ms\nI0730 18:50:00.495314       1 operator.go:146] Starting syncing operator at 2020-07-30 18:50:00.495304626 +0000 UTC m=+886.891547890\nI0730 18:50:00.529531       1 operator.go:148] Finished syncing operator at 34.219365ms\nI0730 18:50:08.770174       1 operator.go:146] Starting syncing operator at 2020-07-30 18:50:08.770159977 +0000 UTC m=+895.166403347\nI0730 18:50:08.794146       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0730 18:50:08.794279       1 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/serving-cert-495506434/tls.crt::/tmp/serving-cert-495506434/tls.key\nI0730 18:50:08.794373       1 base_controller.go:136] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0730 18:50:08.794371       1 reflector.go:181] Stopping reflector *v1.Deployment (34m6.409758329s) from k8s.io/client-go/informers/factory.go:135\nI0730 18:50:08.794395       1 base_controller.go:136] Shutting down ManagementStateController ...\nI0730 18:50:08.794410       1 base_controller.go:136] Shutting down LoggingSyncer ...\nI0730 18:50:08.794433       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (20m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0730 18:50:08.794487       1 reflector.go:181] Stopping reflector *v1beta1.CustomResourceDefinition (24m43.984670228s) from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117\nW0730 18:50:08.794502       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Jul 30 18:50:12.740 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-78cf98fccd-5xhqc node/ip-10-0-179-241.us-west-1.compute.internal container/aws-ebs-csi-driver-operator container exited with code 1 (Error): 
Jul 30 18:50:24.521 E ns/openshift-monitoring pod/node-exporter-59fr2 node/ip-10-0-195-252.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T18:29:16.381Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T18:29:16.381Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 18:50:30.832 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-179-241.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 18:35:34 Watching directory: "/etc/alertmanager/config"\n
Jul 30 18:50:30.832 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-179-241.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): r.go:328: unable to retrieve authentication information for tokens: Unauthorized\n2020/07/30 18:35:45 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:47 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:49 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:51 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:53 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 18:35:55 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 18:35:55 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 18:35:55 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 18:35:55 http.go:107: HTTPS: listening on [::]:9095\nI0730 18:35:55.578152       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0730 18:36:00.185263       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:36:00 oauthproxy.go:790: requestauth: 10.131.0.21:36238 Unauthorized\nE0730 18:37:30.190704       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:37:30 oauthproxy.go:790: requestauth: 10.131.0.21:37812 Unauthorized\nE0730 18:39:00.184086       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:39:00 oauthproxy.go:790: requestauth: 10.131.0.21:39338 Unauthorized\n
Jul 30 18:50:32.840 E ns/openshift-monitoring pod/openshift-state-metrics-b554fcb9d-29slh node/ip-10-0-179-241.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Jul 30 18:50:39.902 E ns/openshift-monitoring pod/kube-state-metrics-6b76c84846-p59s5 node/ip-10-0-179-241.us-west-1.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Jul 30 18:50:41.398 E ns/openshift-controller-manager pod/controller-manager-qz4fj node/ip-10-0-147-105.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): erver ("unable to decode an event from the watch stream: stream error: stream ID 183; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:47:57.402378       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 229; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:47:57.402647       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 187; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:47:57.402917       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 111; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:48:47.947430       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 255; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:48:47.947930       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 109; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:48:47.948070       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 181; INTERNAL_ERROR") has prevented the request from succeeding\nW0730 18:48:47.948118       1 reflector.go:402] runtime/asm_amd64.s:1357: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 259; INTERNAL_ERROR") has prevented the request from succeeding\n
Jul 30 18:50:41.636 E ns/openshift-controller-manager pod/controller-manager-zfcxs node/ip-10-0-195-252.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): I0730 18:29:58.657950       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (554623c)\nI0730 18:29:58.660447       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-yw8l6h6h/stable-initial@sha256:65987aff5ea57cc3eaa0dcecd232ba0621a63e654ab901cf587d50c43c584879"\nI0730 18:29:58.660467       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-yw8l6h6h/stable-initial@sha256:12bb1c593c24352bb58d5d6581af075f21f07ad36129d0ae4855ffdb002ba707"\nI0730 18:29:58.660561       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0730 18:29:58.660919       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Jul 30 18:50:45.591 E ns/openshift-monitoring pod/node-exporter-pm9s8 node/ip-10-0-161-20.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T18:29:10.440Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T18:29:10.440Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 18:50:58.701 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-20.us-west-1.compute.internal node/ip-10-0-161-20.us-west-1.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 18:51:06.286 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-8bbf67bd8-cn8rd node/ip-10-0-199-86.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Jul 30 18:51:06.286 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-8bbf67bd8-cn8rd node/ip-10-0-199-86.us-west-1.compute.internal container/csi-snapshotter container exited with code 2 (Error): 
Jul 30 18:51:06.286 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-8bbf67bd8-cn8rd node/ip-10-0-199-86.us-west-1.compute.internal container/csi-attacher container exited with code 2 (Error): 
Jul 30 18:51:06.286 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-8bbf67bd8-cn8rd node/ip-10-0-199-86.us-west-1.compute.internal container/csi-resizer container exited with code 2 (Error): 
Jul 30 18:51:09.504 E ns/openshift-cloud-credential-operator pod/pod-identity-webhook-699666bc4-glfh5 node/ip-10-0-147-105.us-west-1.compute.internal container/pod-identity-webhook container exited with code 137 (Error): 
Jul 30 18:51:13.453 E ns/openshift-monitoring pod/grafana-5b5ddd784f-2ktv4 node/ip-10-0-199-86.us-west-1.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Jul 30 18:51:17.560 E ns/openshift-monitoring pod/prometheus-adapter-74849bc4b4-lhtcj node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0730 18:36:31.872743       1 adapter.go:94] successfully using in-cluster auth\nI0730 18:36:33.127126       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0730 18:36:33.127126       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0730 18:36:33.128013       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 18:36:33.128231       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 18:36:33.128315       1 tlsconfig.go:219] Starting DynamicServingCertificateController\n
Jul 30 18:51:21.094 E ns/openshift-service-ca pod/service-ca-58c7f8d964-7h4qk node/ip-10-0-195-252.us-west-1.compute.internal container/service-ca-controller container exited with code 1 (Error): 
Jul 30 18:51:21.414 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-140-57.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T18:51:09.438Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 18:51:21.571 E ns/openshift-console-operator pod/console-operator-6c58d7cb7c-v7c46 node/ip-10-0-147-105.us-west-1.compute.internal container/console-operator container exited with code 1 (Error): roller.go:349] shutting down ConsoleRouteSyncController\nI0730 18:51:20.671402       1 controller.go:181] shutting down ConsoleServiceSyncController\nI0730 18:51:20.671414       1 base_controller.go:136] Shutting down UnsupportedConfigOverridesController ...\nI0730 18:51:20.671426       1 base_controller.go:136] Shutting down ManagementStateController ...\nI0730 18:51:20.671434       1 controller.go:70] Shutting down Console\nI0730 18:51:20.671443       1 base_controller.go:136] Shutting down StatusSyncer_console ...\nI0730 18:51:20.671453       1 base_controller.go:136] Shutting down ResourceSyncController ...\nI0730 18:51:20.671462       1 base_controller.go:136] Shutting down LoggingSyncer ...\nI0730 18:51:20.671469       1 controller.go:115] shutting down ConsoleResourceSyncDestinationController\nI0730 18:51:20.671494       1 base_controller.go:83] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0730 18:51:20.675588       1 base_controller.go:73] All UnsupportedConfigOverridesController workers have been terminated\nI0730 18:51:20.671505       1 base_controller.go:83] Shutting down worker of ManagementStateController controller ...\nI0730 18:51:20.675640       1 base_controller.go:73] All ManagementStateController workers have been terminated\nI0730 18:51:20.671515       1 base_controller.go:83] Shutting down worker of StatusSyncer_console controller ...\nI0730 18:51:20.675683       1 base_controller.go:73] All StatusSyncer_console workers have been terminated\nI0730 18:51:20.671524       1 base_controller.go:83] Shutting down worker of ResourceSyncController controller ...\nI0730 18:51:20.675723       1 base_controller.go:73] All ResourceSyncController workers have been terminated\nI0730 18:51:20.671534       1 base_controller.go:83] Shutting down worker of LoggingSyncer controller ...\nI0730 18:51:20.675766       1 base_controller.go:73] All LoggingSyncer workers have been terminated\nW0730 18:51:20.671550       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Jul 30 18:51:25.095 E ns/openshift-monitoring pod/thanos-querier-7cd775c8cf-qv22l node/ip-10-0-179-241.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): hproxy.go:782: basicauth: 10.128.0.14:39252 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:42:57 oauthproxy.go:782: basicauth: 10.128.0.14:44578 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:42:57 oauthproxy.go:782: basicauth: 10.128.0.14:44578 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:43:51 oauthproxy.go:782: basicauth: 10.128.0.14:46212 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:43:51 oauthproxy.go:782: basicauth: 10.128.0.14:46212 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:44:41 oauthproxy.go:782: basicauth: 10.130.0.37:44590 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:46:41 oauthproxy.go:782: basicauth: 10.130.0.37:57576 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:46:41 oauthproxy.go:782: basicauth: 10.130.0.37:57576 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:48:41 oauthproxy.go:782: basicauth: 10.130.0.37:38460 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:48:41 oauthproxy.go:782: basicauth: 10.130.0.37:38460 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:49:41 oauthproxy.go:782: basicauth: 10.130.0.37:40992 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:49:41 oauthproxy.go:782: basicauth: 10.130.0.37:40992 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:50:41 oauthproxy.go:782: basicauth: 10.130.0.37:44548 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:50:41 oauthproxy.go:782: basicauth: 10.130.0.37:44548 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 18:51:25.538 E ns/openshift-monitoring pod/node-exporter-7brlz node/ip-10-0-199-86.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): :112 collector=mountstats\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=netclass\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=netdev\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=netstat\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=nfs\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=nfsd\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=powersupplyclass\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=pressure\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=rapl\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=schedstat\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=sockstat\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=softnet\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=stat\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=textfile\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=thermal_zone\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=time\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=timex\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=udp_queues\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=uname\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=vmstat\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=xfs\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:112 collector=zfs\nlevel=info ts=2020-07-30T18:34:36.285Z caller=node_exporter.go:191 msg="Listening on" address=127.0.0.1:9100\nlevel=info ts=2020-07-30T18:34:36.285Z caller=tls_config.go:170 msg="TLS is disabled and it cannot be enabled on the fly." http2=false\n
Jul 30 18:51:28.649 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/30 18:35:41 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/07/30 18:35:42 config map updated\n2020/07/30 18:35:42 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n2020/07/30 18:37:02 config map updated\n2020/07/30 18:37:02 successfully triggered reload\n2020/07/30 18:40:42 config map updated\n2020/07/30 18:40:42 successfully triggered reload\n
Jul 30 18:51:28.649 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): k.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:36:44 oauthproxy.go:790: requestauth: 10.131.0.14:39100 Unauthorized\n2020/07/30 18:37:44 oauthproxy.go:782: basicauth: 10.131.0.14:40070 Authorization header does not start with 'Basic', skipping basic authentication\nE0730 18:37:44.335985       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:37:44 oauthproxy.go:790: requestauth: 10.131.0.14:40070 Unauthorized\n2020/07/30 18:38:44 oauthproxy.go:782: basicauth: 10.131.0.14:41064 Authorization header does not start with 'Basic', skipping basic authentication\nE0730 18:38:44.361883       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:38:44 oauthproxy.go:790: requestauth: 10.131.0.14:41064 Unauthorized\n2020/07/30 18:39:44 oauthproxy.go:782: basicauth: 10.131.0.14:42120 Authorization header does not start with 'Basic', skipping basic authentication\nE0730 18:39:44.386642       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:39:44 oauthproxy.go:790: requestauth: 10.131.0.14:42120 Unauthorized\n2020/07/30 18:40:44 oauthproxy.go:782: basicauth: 10.131.0.14:43196 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:45:15 oauthproxy.go:782: basicauth: 10.131.0.14:49144 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:47:57 oauthproxy.go:782: basicauth: 10.129.0.22:60528 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/30 18:49:45 oauthproxy.go:782: basicauth: 10.131.0.14:54382 Authorization header does not start with 'Basic', skipping basic authentication\n202
Jul 30 18:51:28.649 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-30T18:35:36.496893216Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-07-30T18:35:36.498261142Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-07-30T18:35:41.498777369Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-30T18:35:46.673685959Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T18:35:46.673782437Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-07-30T18:35:46.812776757Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T18:38:46.797882283Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T18:41:47.019049007Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 30 18:51:30.139 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-179-241.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 18:35:34 Watching directory: "/etc/alertmanager/config"\n
Jul 30 18:51:30.139 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-179-241.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): r.go:328: unable to retrieve authentication information for tokens: Unauthorized\n2020/07/30 18:35:45 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:47 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:49 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:51 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:53 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 provider.go:352: unable to retrieve authorization information for users: Unauthorized\n2020/07/30 18:35:55 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 18:35:55 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 18:35:55 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 18:35:55 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 18:35:55 http.go:107: HTTPS: listening on [::]:9095\nI0730 18:35:55.609226       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nE0730 18:36:00.186342       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:36:00 oauthproxy.go:790: requestauth: 10.131.0.21:52912 Unauthorized\nE0730 18:37:30.190712       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:37:30 oauthproxy.go:790: requestauth: 10.131.0.21:54484 Unauthorized\nE0730 18:39:00.184346       1 webhook.go:109] Failed to make webhook authenticator request: Unauthorized\n2020/07/30 18:39:00 oauthproxy.go:790: requestauth: 10.131.0.21:56010 Unauthorized\n
Jul 30 18:51:43.233 E ns/openshift-marketplace pod/redhat-operators-6bf64c5b57-bzdgt node/ip-10-0-179-241.us-west-1.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 18:51:45.242 E ns/openshift-marketplace pod/redhat-marketplace-575679746f-nxvd2 node/ip-10-0-179-241.us-west-1.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
Jul 30 18:51:45.984 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T18:51:37.472Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 18:52:15.386 E ns/openshift-marketplace pod/certified-operators-b79d9bd67-d5658 node/ip-10-0-179-241.us-west-1.compute.internal container/certified-operators container exited with code 2 (Error): 
Jul 30 18:52:23.030 E ns/openshift-console pod/console-859d6769dc-ztmjr node/ip-10-0-161-20.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-07-30T18:38:25Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-07-30T18:38:25Z cmd/main: cookies are secure!\n2020-07-30T18:38:25Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T18:38:35Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T18:38:45Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T18:38:55Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T18:39:05Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-07-30T18:39:15Z cmd/main: Binding to [::]:8443...\n2020-07-30T18:39:15Z cmd/main: using TLS\n
Jul 30 18:52:46.684 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus rules PrometheusRule failed: updating PrometheusRule object failed: Internal error occurred: failed calling webhook "prometheusrules.openshift.io": Post https://prometheus-operator.openshift-monitoring.svc:8080/admission-prometheusrules/validate?timeout=5s: x509: certificate signed by unknown authority
Jul 30 18:53:23.218 E ns/openshift-sdn pod/sdn-controller-kqv2v node/ip-10-0-161-20.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0730 18:24:35.227092       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0730 18:27:55.801082       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 30 18:53:36.873 E ns/openshift-sdn pod/sdn-controller-qf9dk node/ip-10-0-195-252.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): .go:150] Created HostSubnet ip-10-0-199-86.us-west-1.compute.internal (host: "ip-10-0-199-86.us-west-1.compute.internal", ip: "10.0.199.86", subnet: "10.128.2.0/23")\nI0730 18:34:11.024240       1 subnets.go:150] Created HostSubnet ip-10-0-140-57.us-west-1.compute.internal (host: "ip-10-0-140-57.us-west-1.compute.internal", ip: "10.0.140.57", subnet: "10.129.2.0/23")\nI0730 18:40:38.499261       1 vnids.go:116] Allocated netid 6823568 for namespace "e2e-frontend-ingress-available-8506"\nI0730 18:40:38.511736       1 vnids.go:116] Allocated netid 8012472 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-8999"\nI0730 18:40:38.525873       1 vnids.go:116] Allocated netid 9713835 for namespace "e2e-k8s-sig-apps-deployment-upgrade-9777"\nI0730 18:40:38.550927       1 vnids.go:116] Allocated netid 6747414 for namespace "e2e-k8s-service-lb-available-4355"\nI0730 18:40:38.567819       1 vnids.go:116] Allocated netid 1752680 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-5157"\nI0730 18:40:38.610328       1 vnids.go:116] Allocated netid 9095079 for namespace "e2e-openshift-api-available-5108"\nI0730 18:40:38.637975       1 vnids.go:116] Allocated netid 15148091 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-126"\nI0730 18:40:38.665165       1 vnids.go:116] Allocated netid 14739028 for namespace "e2e-k8s-sig-apps-job-upgrade-1750"\nI0730 18:40:38.693139       1 vnids.go:116] Allocated netid 9303484 for namespace "e2e-kubernetes-api-available-1516"\nI0730 18:40:38.709545       1 vnids.go:116] Allocated netid 15930517 for namespace "e2e-check-for-critical-alerts-21"\nI0730 18:40:38.727786       1 vnids.go:116] Allocated netid 13602320 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-9173"\nE0730 18:50:19.632600       1 leaderelection.go:356] Failed to update lock: Put https://api-int.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: read tcp 10.0.195.252:36768->10.0.181.18:6443: read: connection reset by peer\n
Jul 30 18:53:52.112 E ns/openshift-sdn pod/sdn-controller-hfts4 node/ip-10-0-147-105.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0730 18:24:40.802351       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0730 18:25:08.826554       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0730 18:27:55.802090       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 30 18:53:52.924 E ns/openshift-multus pod/multus-admission-controller-9mn6r node/ip-10-0-195-252.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Jul 30 18:53:53.594 E ns/openshift-sdn pod/ovs-q2tzc node/ip-10-0-179-241.us-west-1.compute.internal container/openvswitch container exited with code 137 (Error): 175|bridge|INFO|bridge br0: added interface vetha0cf0022 on port 26\n2020-07-30T18:51:57.152Z|00176|connmgr|INFO|br0<->unix#430: 5 flow_mods in the last 0 s (5 adds)\n2020-07-30T18:51:57.189Z|00177|connmgr|INFO|br0<->unix#433: 2 flow_mods in the last 0 s (2 deletes)\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-30T18:52:07.923Z|00056|jsonrpc|WARN|unix#510: receive error: Connection reset by peer\n2020-07-30T18:52:07.923Z|00057|reconnect|WARN|unix#510: connection dropped (Connection reset by peer)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-07-30T18:52:13.059Z|00178|connmgr|INFO|br0<->unix#440: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:52:13.088Z|00179|connmgr|INFO|br0<->unix#443: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T18:52:13.112Z|00180|bridge|INFO|bridge br0: deleted interface veth84fbc126 on port 13\n2020-07-30T18:52:14.830Z|00181|connmgr|INFO|br0<->unix#446: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:52:14.860Z|00182|connmgr|INFO|br0<->unix#449: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T18:52:14.883Z|00183|bridge|INFO|bridge br0: deleted interface vethc46eed12 on port 12\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-30T18:53:07.985Z|00058|jsonrpc|WARN|unix#536: receive error: Connection reset by peer\n2020-07-30T18:53:07.986Z|00059|reconnect|WARN|unix#536: connection dropped (Connection reset by peer)\n\n==> /host/var/log/openvswitch/ovs-vswitchd.log <==\n2020-07-30T18:53:22.291Z|00184|connmgr|INFO|br0<->unix#458: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:53:22.319Z|00185|connmgr|INFO|br0<->unix#461: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T18:53:22.342Z|00186|bridge|INFO|bridge br0: deleted interface veth671415f2 on port 14\n2020-07-30T18:53:24.944Z|00187|bridge|INFO|bridge br0: added interface veth707f95be on port 27\n2020-07-30T18:53:24.975Z|00188|connmgr|INFO|br0<->unix#464: 5 flow_mods in the last 0 s (5 adds)\n2020-07-30T18:53:25.015Z|00189|connmgr|INFO|br0<->unix#467: 2 flow_mods in the last 0 s (2 deletes)\n
Jul 30 18:54:19.246 E kube-apiserver Kube API started failing: Get https://api.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
Jul 30 18:54:34.322 E ns/openshift-multus pod/multus-admission-controller-j5jvd node/ip-10-0-147-105.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
Jul 30 18:54:47.503 E ns/openshift-sdn pod/ovs-gllz6 node/ip-10-0-161-20.us-west-1.compute.internal container/openvswitch container exited with code 137 (Error): r|INFO|br0<->unix#980: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.065Z|00402|connmgr|INFO|br0<->unix#984: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.090Z|00403|connmgr|INFO|br0<->unix#987: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.112Z|00404|connmgr|INFO|br0<->unix#990: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.132Z|00405|connmgr|INFO|br0<->unix#993: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.160Z|00406|connmgr|INFO|br0<->unix#996: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.203Z|00407|connmgr|INFO|br0<->unix#1000: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.218Z|00408|connmgr|INFO|br0<->unix#1003: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:53:36.256Z|00409|connmgr|INFO|br0<->unix#1006: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.278Z|00410|connmgr|INFO|br0<->unix#1009: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:53:36.303Z|00411|connmgr|INFO|br0<->unix#1012: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.338Z|00412|connmgr|INFO|br0<->unix#1015: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:53:36.358Z|00413|connmgr|INFO|br0<->unix#1018: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.378Z|00414|connmgr|INFO|br0<->unix#1021: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:53:36.394Z|00415|connmgr|INFO|br0<->unix#1024: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:53:36.404Z|00416|connmgr|INFO|br0<->unix#1026: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:53:36.426Z|00417|connmgr|INFO|br0<->unix#1029: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:53:36.453Z|00418|connmgr|INFO|br0<->unix#1032: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:53:36.473Z|00419|connmgr|INFO|br0<->unix#1035: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:53:36.494Z|00420|connmgr|INFO|br0<->unix#1038: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:53:36.523Z|00421|connmgr|INFO|br0<->unix#1041: 1 flow_mods in the last 0 s (1 adds)\n
Jul 30 18:54:55.217 E ns/openshift-multus pod/multus-tcxdk node/ip-10-0-195-252.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 18:54:55.339 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-105.us-west-1.compute.internal node/ip-10-0-147-105.us-west-1.compute.internal container/setup init container exited with code 124 (Error): ................................................................................
Jul 30 18:54:55.377 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-105.us-west-1.compute.internal node/ip-10-0-147-105.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): e&resourceVersion=42272&timeout=7m46s&timeoutSeconds=466&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:54:54.369564       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=39798&timeout=8m41s&timeoutSeconds=521&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:54:54.370604       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=51184&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:54:54.371691       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=46872&timeout=8m10s&timeoutSeconds=490&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:54:54.372719       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=49718&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0730 18:54:54.374057       1 reflector.go:382] runtime/asm_amd64.s:1357: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=35019&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0730 18:54:54.448581       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-147-105 stopped leading\nI0730 18:54:54.448925       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0730 18:54:54.448965       1 policy_controller.go:94] leaderelection lost\n
Jul 30 18:55:37.400 E ns/openshift-sdn pod/ovs-x2t27 node/ip-10-0-195-252.us-west-1.compute.internal container/openvswitch container exited with code 137 (Error): |br0<->unix#1477: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.533Z|00623|connmgr|INFO|br0<->unix#1480: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:47.557Z|00624|connmgr|INFO|br0<->unix#1483: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.576Z|00625|connmgr|INFO|br0<->unix#1486: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:47.597Z|00626|connmgr|INFO|br0<->unix#1489: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.614Z|00627|connmgr|INFO|br0<->unix#1492: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:47.629Z|00628|connmgr|INFO|br0<->unix#1495: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.646Z|00629|connmgr|INFO|br0<->unix#1498: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:47.665Z|00630|connmgr|INFO|br0<->unix#1501: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.682Z|00631|connmgr|INFO|br0<->unix#1504: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:47.698Z|00632|connmgr|INFO|br0<->unix#1507: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.710Z|00633|connmgr|INFO|br0<->unix#1509: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:47.739Z|00634|connmgr|INFO|br0<->unix#1513: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.756Z|00635|connmgr|INFO|br0<->unix#1516: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:47.771Z|00636|connmgr|INFO|br0<->unix#1519: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.783Z|00637|connmgr|INFO|br0<->unix#1522: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:47.799Z|00638|connmgr|INFO|br0<->unix#1525: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.811Z|00639|connmgr|INFO|br0<->unix#1528: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:47.824Z|00640|connmgr|INFO|br0<->unix#1531: 1 flow_mods in the last 0 s (1 deletes)\n2020-07-30T18:54:47.842Z|00641|connmgr|INFO|br0<->unix#1534: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:47.849Z|00642|connmgr|INFO|br0<->unix#1536: 1 flow_mods in the last 0 s (1 deletes)\n
Jul 30 18:56:02.895 E ns/openshift-multus pod/multus-zrf5x node/ip-10-0-199-86.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 18:57:41.090 E ns/openshift-multus pod/multus-768b6 node/ip-10-0-161-20.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 18:58:25.110 E ns/openshift-sdn pod/ovs-72nqj node/ip-10-0-147-105.us-west-1.compute.internal container/openvswitch container exited with code 137 (Error): <->unix#978: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:16.084Z|00388|connmgr|INFO|br0<->unix#981: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:16.127Z|00389|connmgr|INFO|br0<->unix#984: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:16.156Z|00390|connmgr|INFO|br0<->unix#987: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:16.187Z|00391|connmgr|INFO|br0<->unix#990: 3 flow_mods in the last 0 s (3 adds)\n2020-07-30T18:54:16.220Z|00392|connmgr|INFO|br0<->unix#993: 1 flow_mods in the last 0 s (1 adds)\n2020-07-30T18:54:33.414Z|00393|connmgr|INFO|br0<->unix#996: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:54:33.440Z|00394|connmgr|INFO|br0<->unix#999: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T18:54:33.461Z|00395|bridge|INFO|bridge br0: deleted interface veth75043efa on port 4\n2020-07-30T18:54:35.740Z|00396|bridge|INFO|bridge br0: added interface veth3afdf67b on port 62\n2020-07-30T18:54:35.771Z|00397|connmgr|INFO|br0<->unix#1002: 5 flow_mods in the last 0 s (5 adds)\n2020-07-30T18:54:35.807Z|00398|connmgr|INFO|br0<->unix#1005: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:56:37.034Z|00399|bridge|INFO|bridge br0: added interface vethfd1bf375 on port 63\n2020-07-30T18:56:37.089Z|00400|connmgr|INFO|br0<->unix#1021: 5 flow_mods in the last 0 s (5 adds)\n2020-07-30T18:56:37.161Z|00401|connmgr|INFO|br0<->unix#1025: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:56:37.166Z|00402|connmgr|INFO|br0<->unix#1027: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-07-30T18:56:41.792Z|00403|connmgr|INFO|br0<->unix#1030: 2 flow_mods in the last 0 s (2 deletes)\n2020-07-30T18:56:41.821Z|00404|connmgr|INFO|br0<->unix#1033: 4 flow_mods in the last 0 s (4 deletes)\n2020-07-30T18:56:41.849Z|00405|bridge|INFO|bridge br0: deleted interface vethfd1bf375 on port 63\n\n==> /host/var/log/openvswitch/ovsdb-server.log <==\n2020-07-30T18:57:52.129Z|00107|jsonrpc|WARN|unix#1067: send error: Broken pipe\n2020-07-30T18:57:52.129Z|00108|reconnect|WARN|unix#1067: connection dropped (Broken pipe)\n
Jul 30 18:59:47.379 E ns/openshift-multus pod/multus-kcfxw node/ip-10-0-179-241.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 19:01:29.168 E ns/openshift-multus pod/multus-2sfc6 node/ip-10-0-140-57.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
Jul 30 19:02:54.665 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-dns-operator/dns-operator" (489 of 606)
Jul 30 19:05:28.321 E ns/openshift-machine-config-operator pod/machine-config-operator-58cf978474-fgt4x node/ip-10-0-195-252.us-west-1.compute.internal container/machine-config-operator container exited with code 2 (Error): 4932 +0000 UTC m=+2.657359729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}\nI0730 18:21:12.970964       1 recorder_logging.go:35] &Event{ObjectMeta:{dummy.16269ac14d9ce1ed  dummy    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:SecretCreated,Message:Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2020-07-30 18:21:12.970912237 +0000 UTC m=+2.665922649,LastTimestamp:2020-07-30 18:21:12.970912237 +0000 UTC m=+2.665922649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}\nI0730 18:21:12.970984       1 sync.go:72] [init mode] synced MachineConfigPools in 36.624721ms\nI0730 18:22:31.061316       1 sync.go:72] [init mode] synced MachineConfigDaemon in 1m18.084513588s\nI0730 18:22:36.129572       1 sync.go:72] [init mode] synced MachineConfigController in 5.064380487s\nI0730 18:22:38.223880       1 sync.go:72] [init mode] synced MachineConfigServer in 2.090631485s\nI0730 18:23:16.240622       1 sync.go:72] [init mode] synced RequiredPools in 38.013374883s\nI0730 18:23:16.274321       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"fc97063e-da8e-4104-b3de-727b7ba14814", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator version changed from [] to [{operator 0.0.1-2020-07-30-162507}]\nI0730 18:23:16.846354       1 sync.go:103] Initialization complete\nE0730 18:27:55.808041       1 leaderelection.go:320] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Jul 30 19:07:26.600 E ns/openshift-machine-config-operator pod/machine-config-daemon-r67jg node/ip-10-0-147-105.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 19:08:46.386 E ns/openshift-machine-config-operator pod/machine-config-daemon-p8bsd node/ip-10-0-179-241.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 19:09:26.024 E ns/openshift-machine-config-operator pod/machine-config-daemon-b7z5c node/ip-10-0-161-20.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 19:09:47.127 E ns/openshift-machine-config-operator pod/machine-config-daemon-ml46c node/ip-10-0-195-252.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 19:10:35.194 E ns/openshift-machine-config-operator pod/machine-config-daemon-7rbqn node/ip-10-0-199-86.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
Jul 30 19:11:52.510 E ns/openshift-machine-config-operator pod/machine-config-controller-54b446dc5b-nmrww node/ip-10-0-195-252.us-west-1.compute.internal container/machine-config-controller container exited with code 2 (Error): openshift.io/state = Done\nI0730 18:34:42.189949       1 node_controller.go:463] Pool worker: node ip-10-0-199-86.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-74e9df08e73ef10d77164aa44bf2c79b\nI0730 18:34:42.189970       1 node_controller.go:463] Pool worker: node ip-10-0-199-86.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-74e9df08e73ef10d77164aa44bf2c79b\nI0730 18:34:42.189976       1 node_controller.go:463] Pool worker: node ip-10-0-199-86.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0730 18:34:46.593246       1 node_controller.go:446] Pool worker: node ip-10-0-179-241.us-west-1.compute.internal is now reporting ready\nI0730 18:34:53.171396       1 node_controller.go:463] Pool worker: node ip-10-0-140-57.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-74e9df08e73ef10d77164aa44bf2c79b\nI0730 18:34:53.171419       1 node_controller.go:463] Pool worker: node ip-10-0-140-57.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-74e9df08e73ef10d77164aa44bf2c79b\nI0730 18:34:53.171425       1 node_controller.go:463] Pool worker: node ip-10-0-140-57.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0730 18:35:07.618809       1 node_controller.go:446] Pool worker: node ip-10-0-199-86.us-west-1.compute.internal is now reporting ready\nI0730 18:35:31.268198       1 node_controller.go:446] Pool worker: node ip-10-0-140-57.us-west-1.compute.internal is now reporting ready\nI0730 18:35:40.483621       1 node_controller.go:468] Pool worker: node ip-10-0-199-86.us-west-1.compute.internal changed labels\nI0730 18:35:48.411202       1 node_controller.go:468] Pool worker: node ip-10-0-140-57.us-west-1.compute.internal changed labels\nI0730 18:35:53.577860       1 node_controller.go:468] Pool worker: node ip-10-0-179-241.us-west-1.compute.internal changed labels\n
Jul 30 19:13:51.798 E ns/openshift-machine-config-operator pod/machine-config-server-lsmmt node/ip-10-0-147-105.us-west-1.compute.internal container/machine-config-server container exited with code 2 (Error): I0730 18:25:56.594032       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty (cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 18:25:56.594611       1 api.go:69] Launching server on :22624\nI0730 18:25:56.594662       1 api.go:69] Launching server on :22623\nI0730 18:30:32.806494       1 api.go:116] Pool worker requested by address:"10.0.181.18:22566" User-Agent:"Ignition/2.3.0"\nI0730 18:30:44.694449       1 api.go:116] Pool worker requested by address:"10.0.241.49:31673" User-Agent:"Ignition/2.3.0"\n
Jul 30 19:13:54.810 E ns/openshift-machine-config-operator pod/machine-config-server-vrmt9 node/ip-10-0-161-20.us-west-1.compute.internal container/machine-config-server container exited with code 2 (Error): I0730 18:25:59.743835       1 start.go:38] Version: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty (cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 18:25:59.744670       1 api.go:69] Launching server on :22624\nI0730 18:25:59.744715       1 api.go:69] Launching server on :22623\nI0730 18:30:36.065430       1 api.go:116] Pool worker requested by address:"10.0.241.49:60498" User-Agent:"Ignition/2.3.0"\n
Jul 30 19:14:04.954 E ns/openshift-monitoring pod/prometheus-adapter-7885c7ccd7-8qgw7 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-adapter container exited with code 2 (Error): ynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 18:51:27.342241       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 18:51:27.342601       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0730 18:52:29.896477       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 18:52:29.896633       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 19:06:00.663362       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 19:06:00.663498       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 19:06:00.680161       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 19:06:00.680278       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\n
Jul 30 19:14:05.071 E ns/openshift-marketplace pod/community-operators-d58df7bf9-c9vn8 node/ip-10-0-199-86.us-west-1.compute.internal container/community-operators container exited with code 2 (Error): 
Jul 30 19:14:05.302 E ns/openshift-marketplace pod/certified-operators-7777f47c75-488x9 node/ip-10-0-199-86.us-west-1.compute.internal container/certified-operators container exited with code 2 (Error): 
Jul 30 19:14:05.337 E ns/openshift-marketplace pod/redhat-operators-766bb6ffc7-xlbc8 node/ip-10-0-199-86.us-west-1.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 19:14:06.097 E ns/openshift-monitoring pod/openshift-state-metrics-7cfc789869-k6vgf node/ip-10-0-199-86.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Jul 30 19:14:06.145 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/07/30 18:51:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jul 30 19:14:06.145 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/07/30 18:51:43 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 18:51:43 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 18:51:43 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 18:51:43 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 18:51:43 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 18:51:43 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/30 18:51:43 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 18:51:43 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/30 18:51:43 http.go:107: HTTPS: listening on [::]:9091\nI0730 18:51:43.936303       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 19:09:44 oauthproxy.go:782: basicauth: 10.128.0.90:50362 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 19:14:06.145 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-30T18:51:42.272243342Z caller=main.go:87 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-07-30T18:51:42.274068109Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-30T18:51:47.914202636Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-07-30T18:51:47.914323506Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Jul 30 19:14:09.470 E ns/openshift-machine-config-operator pod/machine-config-operator-59bdbfb44d-9mf6x node/ip-10-0-161-20.us-west-1.compute.internal container/machine-config-operator container exited with code 2 (Error): I0730 19:05:27.318045       1 start.go:46] Version: 0.0.1-2020-07-30-162805 (Raw: machine-config-daemon-4.6.0-202006240615.p0-122-gcd925586-dirty, Hash: cd925586f71d709a36a11945a7abf8104e3d2046)\nI0730 19:05:27.319830       1 leaderelection.go:242] attempting to acquire leader lease  openshift-machine-config-operator/machine-config...\nI0730 19:07:25.133855       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0730 19:07:25.659997       1 operator.go:270] Starting MachineConfigOperator\nI0730 19:07:25.668017       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"fc97063e-da8e-4104-b3de-727b7ba14814", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-07-30-162507}] to [{operator 0.0.1-2020-07-30-162805}]\n
Jul 30 19:14:09.553 E ns/openshift-machine-api pod/machine-api-operator-865fc8d95b-skk48 node/ip-10-0-161-20.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 30 19:14:37.338 E ns/openshift-monitoring pod/kube-state-metrics-64bb77c68f-vmbrk node/ip-10-0-179-241.us-west-1.compute.internal container/kube-state-metrics container exited with code 255 (Error): 
Jul 30 19:14:39.225 E ns/openshift-dns-operator pod/dns-operator-6bf99969f7-mkm27 node/ip-10-0-195-252.us-west-1.compute.internal container/dns-operator container exited with code 1 (Error): time="2020-07-30T19:14:38Z" level=fatal msg="failed to create operator: failed to create operator manager: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jul 30 19:14:40.950 E ns/openshift-authentication pod/oauth-openshift-674b7f5569-tml88 node/ip-10-0-147-105.us-west-1.compute.internal container/oauth-openshift container exited with code 255 (Error): Copying system trust bundle\nI0730 19:14:39.632075       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key"\nI0730 19:14:39.632540       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com::/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-yw8l6h6h-3ef74.origin-ci-int-aws.dev.rhcloud.com"\nF0730 19:14:40.047023       1 cmd.go:49] unable to load configmap based request-header-client-ca-file: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 30 19:14:43.385 E ns/openshift-monitoring pod/prometheus-adapter-7885c7ccd7-sj4w7 node/ip-10-0-179-241.us-west-1.compute.internal container/prometheus-adapter container exited with code 255 (Error): I0730 19:14:43.029441       1 adapter.go:94] successfully using in-cluster auth\nF0730 19:14:43.032190       1 adapter.go:286] unable to install resource metrics API: unable to construct dynamic discovery mapper: unable to populate initial set of REST mappings: Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 30 19:15:33.057 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 30 19:15:40.664 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-179-241.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T19:14:53.545Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 19:16:10.938 E clusteroperator/authentication changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver
Jul 30 19:16:10.940 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
Jul 30 19:16:53.459 E ns/openshift-machine-api pod/machine-api-operator-865fc8d95b-5jv75 node/ip-10-0-195-252.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
Jul 30 19:17:41.821 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-161-20.us-west-1.compute.internal node/ip-10-0-161-20.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): cal/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c\n\ngoroutine 731 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).updateUnfinishedWorkLoop(0xc000704f60)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:198 +0xe0\ncreated by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newQueue\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:58 +0x132\n\ngoroutine 639 [chan receive]:\nk8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1(0x4fab040, 0xc0009160c0, 0xc0006eccf0)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:301 +0xc5\ncreated by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/record/event.go:299 +0x6e\n\ngoroutine 725 [select]:\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc00047ff80, 0xc000d65ed0, 0x5016340, 0xc000330b80)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xb7\ncreated by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c\n\ngoroutine 785 [select]:\nk8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000716960)\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:231 +0x405\ncreated by k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.newDelayingQueue\n	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:68 +0x184\n
Jul 30 19:17:45.782 E ns/openshift-monitoring pod/prometheus-operator-65dc6498cb-6vr6p node/ip-10-0-161-20.us-west-1.compute.internal container/prometheus-operator container exited with code 1 (Error): ts=2020-07-30T19:17:44.817626612Z caller=main.go:217 msg="Starting Prometheus Operator version '0.40.0'."\nlevel=info ts=2020-07-30T19:17:44.827204561Z caller=server.go:54 msg="enabling server side TLS"\nts=2020-07-30T19:17:44.830862591Z caller=main.go:114 msg="Starting secure server on [::]:8080"\nts=2020-07-30T19:17:44.833422799Z caller=main.go:385 msg="Unhandled error received. Exiting..." err="communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"\n
Jul 30 19:18:14.355 E clusteroperator/monitoring changed Degraded to True: UpdatingPrometheusK8SFailed: Failed to rollout the stack. Error: running task Updating Prometheus-k8s failed: reconciling Prometheus rules PrometheusRule failed: updating PrometheusRule object failed: Internal error occurred: failed calling webhook "prometheusrules.openshift.io": Post https://prometheus-operator.openshift-monitoring.svc:8080/admission-prometheusrules/validate?timeout=5s: no endpoints available for service "prometheus-operator"
Jul 30 19:19:10.377 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-179-241.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 18:52:06 Watching directory: "/etc/alertmanager/config"\n
Jul 30 19:19:10.377 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-179-241.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 18:52:09 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 18:52:09 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 18:52:09 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 18:52:09 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 18:52:09 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 18:52:09 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 18:52:09 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 18:52:09 http.go:107: HTTPS: listening on [::]:9095\nI0730 18:52:09.236553       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 18:53:00 provider.go:403: authorizer reason: \n
Jul 30 19:19:10.416 E ns/openshift-marketplace pod/redhat-operators-766bb6ffc7-6jnfj node/ip-10-0-179-241.us-west-1.compute.internal container/redhat-operators container exited with code 2 (Error): 
Jul 30 19:19:10.444 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-5c45fbc858-2cp4s node/ip-10-0-179-241.us-west-1.compute.internal container/aws-ebs-csi-driver-operator container exited with code 1 (Error): 
Jul 30 19:19:10.506 E ns/openshift-monitoring pod/kube-state-metrics-64bb77c68f-vmbrk node/ip-10-0-179-241.us-west-1.compute.internal container/kube-state-metrics container exited with code 2 (Error): 
Jul 30 19:19:10.595 E ns/openshift-monitoring pod/openshift-state-metrics-7cfc789869-cxn4l node/ip-10-0-179-241.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
Jul 30 19:19:11.485 E ns/openshift-marketplace pod/community-operators-d58df7bf9-sgnst node/ip-10-0-179-241.us-west-1.compute.internal container/community-operators container exited with code 2 (Error): 
Jul 30 19:19:11.504 E ns/openshift-monitoring pod/thanos-querier-b546fcfbb-qfq4x node/ip-10-0-179-241.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): 2020/07/30 19:14:41 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/30 19:14:41 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 19:14:41 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 19:14:41 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/30 19:14:41 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 19:14:41 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/30 19:14:41 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 19:14:41 main.go:155: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/30 19:14:41 http.go:107: HTTPS: listening on [::]:9091\nI0730 19:14:41.140564       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/30 19:17:46 oauthproxy.go:782: basicauth: 10.130.0.37:38292 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 30 19:19:11.523 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-98ffd5949-q6trg node/ip-10-0-179-241.us-west-1.compute.internal container/snapshot-controller container exited with code 2 (Error): 
Jul 30 19:19:11.540 E ns/openshift-marketplace pod/certified-operators-7777f47c75-4948w node/ip-10-0-179-241.us-west-1.compute.internal container/certified-operators container exited with code 2 (Error): 
Jul 30 19:19:11.566 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-5f6b5c9c46-6sgzz node/ip-10-0-179-241.us-west-1.compute.internal container/operator container exited with code 1 (Error): lusterOperator (20m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0730 19:19:03.505711       1 base_controller.go:136] Shutting down StatusSyncer_csi-snapshot-controller ...\nI0730 19:19:03.505725       1 base_controller.go:136] Shutting down ManagementStateController ...\nI0730 19:19:03.505787       1 reflector.go:181] Stopping reflector *v1.CSISnapshotController (20m0s) from github.com/openshift/client-go/operator/informers/externalversions/factory.go:101\nI0730 19:19:03.505855       1 reflector.go:181] Stopping reflector *v1.Deployment (23m48.340420924s) from k8s.io/client-go/informers/factory.go:135\nI0730 19:19:03.505922       1 reflector.go:181] Stopping reflector *v1beta1.CustomResourceDefinition (24m22.558310012s) from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117\nI0730 19:19:03.505959       1 builder.go:263] server exited\nI0730 19:19:03.506007       1 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206\nI0730 19:19:03.506093       1 reflector.go:181] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206\nI0730 19:19:03.506145       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0730 19:19:03.506168       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0730 19:19:03.506251       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0730 19:19:03.506280       1 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/serving-cert-506476992/tls.crt::/tmp/serving-cert-506476992/tls.key\nI0730 19:19:03.506493       1 secure_serving.go:222] Stopped listening on [::]:8443\nW0730 19:19:03.506578       1 builder.go:97] graceful termination failed, controllers failed with error: stopped\n
Jul 30 19:19:23.127 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-199-86.us-west-1.compute.internal container/prometheus container exited with code 2 (Error): level=error ts=2020-07-30T19:19:20.259Z caller=main.go:283 msg="Error loading config (--config.file=/etc/prometheus/config_out/prometheus.env.yaml)" err="open /etc/prometheus/config_out/prometheus.env.yaml: no such file or directory"\n
Jul 30 19:19:33.589 E ns/e2e-k8s-sig-apps-job-upgrade-1750 pod/foo-vqxrj node/ip-10-0-179-241.us-west-1.compute.internal container/c container exited with code 137 (Error): 
Jul 30 19:19:47.136 E ns/openshift-console pod/console-7ff747bf47-kmnl8 node/ip-10-0-147-105.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-07-30T18:51:47Z cmd/main: Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\n2020-07-30T18:51:47Z cmd/main: cookies are secure!\n2020-07-30T18:51:47Z cmd/main: Binding to [::]:8443...\n2020-07-30T18:51:47Z cmd/main: using TLS\n
Jul 30 19:20:01.075 E ns/openshift-marketplace pod/marketplace-operator-64c87b59c8-jfr6s node/ip-10-0-195-252.us-west-1.compute.internal container/marketplace-operator container exited with code 1 (Error): 
Jul 30 19:21:39.799 E ns/openshift-marketplace pod/community-operators-d58df7bf9-876rg node/ip-10-0-199-86.us-west-1.compute.internal container/community-operators container exited with code 2 (Error): 
Jul 30 19:23:35.989 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-57.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/07/30 19:14:20 Watching directory: "/etc/alertmanager/config"\n
Jul 30 19:23:35.989 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-57.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/07/30 19:14:20 provider.go:119: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 19:14:20 provider.go:124: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/30 19:14:20 provider.go:313: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/30 19:14:20 oauthproxy.go:202: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/30 19:14:20 oauthproxy.go:223: compiled skip-auth-regex => "^/metrics"\n2020/07/30 19:14:20 oauthproxy.go:229: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/30 19:14:20 oauthproxy.go:239: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/30 19:14:20 http.go:107: HTTPS: listening on [::]:9095\nI0730 19:14:20.519928       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 30 19:23:36.035 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7c54c976df-k2xbn node/ip-10-0-140-57.us-west-1.compute.internal container/csi-provisioner container exited with code 2 (Error): 
Jul 30 19:23:36.035 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7c54c976df-k2xbn node/ip-10-0-140-57.us-west-1.compute.internal container/csi-driver container exited with code 2 (Error): 
Jul 30 19:23:36.035 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7c54c976df-k2xbn node/ip-10-0-140-57.us-west-1.compute.internal container/csi-snapshotter container exited with code 255 (Error): Lost connection to CSI driver, exiting
Jul 30 19:23:36.035 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7c54c976df-k2xbn node/ip-10-0-140-57.us-west-1.compute.internal container/csi-attacher container exited with code 255 (Error): Lost connection to CSI driver, exiting
Jul 30 19:23:36.035 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-7c54c976df-k2xbn node/ip-10-0-140-57.us-west-1.compute.internal container/csi-resizer container exited with code 255 (Error): Lost connection to CSI driver, exiting
Jul 30 19:23:36.230 E ns/openshift-monitoring pod/prometheus-adapter-7885c7ccd7-z26xp node/ip-10-0-140-57.us-west-1.compute.internal container/prometheus-adapter container exited with code 2 (Error): I0730 18:51:01.502482       1 adapter.go:94] successfully using in-cluster auth\nI0730 18:51:03.983476       1 dynamic_cafile_content.go:166] Starting request-header::/etc/tls/private/requestheader-client-ca-file\nI0730 18:51:03.983529       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/tls/private/client-ca-file\nI0730 18:51:03.984112       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\nI0730 18:51:03.985768       1 secure_serving.go:178] Serving securely on [::]:6443\nI0730 18:51:03.985936       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE0730 18:52:29.886767       1 webhook.go:197] Failed to make webhook authorizer request: subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nE0730 18:52:29.886908       1 errors.go:77] subjectaccessreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:openshift-monitoring:prometheus-adapter" cannot create resource "subjectaccessreviews" in API group "authorization.k8s.io" at the cluster scope\nW0730 19:17:26.919379       1 reflector.go:326] k8s.io/client-go/informers/factory.go:135: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received\n
Jul 30 19:23:36.271 E ns/openshift-monitoring pod/grafana-548b7fd787-dcghq node/ip-10-0-140-57.us-west-1.compute.internal container/grafana-proxy container exited with code 2 (Error): 
Jul 30 19:23:37.274 E ns/openshift-monitoring pod/telemeter-client-6b858b8cd-4c4ld node/ip-10-0-140-57.us-west-1.compute.internal container/telemeter-client container exited with code 2 (Error): 
Jul 30 19:23:37.274 E ns/openshift-monitoring pod/telemeter-client-6b858b8cd-4c4ld node/ip-10-0-140-57.us-west-1.compute.internal container/reload container exited with code 2 (Error): 
Jul 30 19:23:37.310 E ns/openshift-kube-storage-version-migrator pod/migrator-7fff7dbd4-qr27g node/ip-10-0-140-57.us-west-1.compute.internal container/migrator container exited with code 2 (Error): I0730 18:49:53.541196       1 migrator.go:18] FLAG: --add_dir_header="false"\nI0730 18:49:53.541316       1 migrator.go:18] FLAG: --alsologtostderr="true"\nI0730 18:49:53.541325       1 migrator.go:18] FLAG: --kube-api-burst="1000"\nI0730 18:49:53.541334       1 migrator.go:18] FLAG: --kube-api-qps="40"\nI0730 18:49:53.541343       1 migrator.go:18] FLAG: --kubeconfig=""\nI0730 18:49:53.541352       1 migrator.go:18] FLAG: --log_backtrace_at=":0"\nI0730 18:49:53.541362       1 migrator.go:18] FLAG: --log_dir=""\nI0730 18:49:53.541370       1 migrator.go:18] FLAG: --log_file=""\nI0730 18:49:53.541375       1 migrator.go:18] FLAG: --log_file_max_size="1800"\nI0730 18:49:53.541383       1 migrator.go:18] FLAG: --logtostderr="true"\nI0730 18:49:53.541390       1 migrator.go:18] FLAG: --skip_headers="false"\nI0730 18:49:53.541397       1 migrator.go:18] FLAG: --skip_log_headers="false"\nI0730 18:49:53.541404       1 migrator.go:18] FLAG: --stderrthreshold="2"\nI0730 18:49:53.541410       1 migrator.go:18] FLAG: --v="2"\nI0730 18:49:53.541417       1 migrator.go:18] FLAG: --vmodule=""\nI0730 18:49:53.543107       1 reflector.go:175] Starting reflector *v1alpha1.StorageVersionMigration (0s) from k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125\nI0730 19:17:26.915504       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0730 19:19:54.884615       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0730 19:19:54.944313       1 reflector.go:402] k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: watch of *v1alpha1.StorageVersionMigration ended with: very short watch: k8s.io/client-go@v0.18.0-beta.2/tools/cache/reflector.go:125: Unexpected watch close - watch lasted less than a second and no items received\n
Jul 30 19:24:04.436 E ns/e2e-k8s-sig-apps-job-upgrade-1750 pod/foo-hjzv4 node/ip-10-0-140-57.us-west-1.compute.internal container/c container exited with code 137 (Error):