Result: SUCCESS
Tests: 2 failed / 23 succeeded
Started: 2020-03-05 00:29
Elapsed: 1h23m
Work namespace: ci-op-p6dgihk0
Refs: openshift-4.5:d61ae9e1, 38:b98820ec
pod: 4ce49e32-5e78-11ea-be23-0a58ac105b6b
repo: openshift/etcd
revision: 1

Test Failures


Cluster upgrade Kubernetes and OpenShift APIs remain available (32m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sand\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 7s of 32m29s (0%):

Mar 05 01:36:24.589 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: unexpected EOF
Mar 05 01:36:24.589 E kube-apiserver Kube API started failing: Get https://api.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
Mar 05 01:36:25.228 E openshift-apiserver OpenShift API is not responding to GET requests
Mar 05 01:36:25.228 - 5s    E kube-apiserver Kube API is not responding to GET requests
Mar 05 01:36:25.456 I openshift-apiserver OpenShift API started responding to GET requests
Mar 05 01:36:31.521 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1583372546.xml
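The reported figure can be checked directly against the two timestamps above. The short Go program below is a sketch of that arithmetic only, not the openshift-tests disruption monitor: it takes the 01:36:24.589 failure start and the 01:36:31.521 recovery from the events above and compares the gap to the 32m29s monitoring window.

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the two disruption events above; only their
    // difference matters, so the missing year is irrelevant.
    const layout = "Jan 02 15:04:05.000"
    start, _ := time.Parse(layout, "Mar 05 01:36:24.589") // Kube API started failing
    end, _ := time.Parse(layout, "Mar 05 01:36:31.521")   // Kube API responding again

    outage := end.Sub(start)                  // ~6.93s, reported as "at least 7s"
    window := 32*time.Minute + 29*time.Second // the 32m29s monitoring window

    fmt.Printf("unreachable for %v of %v (%.2f%%)\n",
        outage.Round(time.Second), window, float64(outage)/float64(window)*100)
    // Prints: unreachable for 7s of 32m29s (0.36%), which the summary line
    // above renders as "(0%)".
}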

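The failure text above comes from the junit_upgrade_1583372546.xml artifact. The sketch below is one way to pull the failed test cases out of that file; it assumes the artifact follows the common JUnit testsuite/testcase/failure layout and is not openshift-tests tooling.

package main

import (
    "encoding/xml"
    "fmt"
    "os"
)

// testSuite models only the fields needed here; the layout is an assumption
// based on the common JUnit testsuite/testcase/failure shape.
type testSuite struct {
    Cases []struct {
        Name    string `xml:"name,attr"`
        Failure *struct {
            Text string `xml:",chardata"`
        } `xml:"failure"`
    } `xml:"testcase"`
}

func main() {
    data, err := os.ReadFile("junit_upgrade_1583372546.xml") // artifact named above
    if err != nil {
        panic(err)
    }
    var suite testSuite
    if err := xml.Unmarshal(data, &suite); err != nil {
        panic(err)
    }
    for _, tc := range suite.Cases {
        if tc.Failure == nil {
            continue // passed or skipped test cases carry no <failure> element
        }
        fmt.Printf("FAIL: %s\n%s\n\n", tc.Name, tc.Failure.Text)
    }
}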


openshift-tests Monitor cluster while tests execute (33m35s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
316 error level events were detected during this test run:

Mar 05 01:10:19.390 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-6964695d6c" has successfully progressed.
Mar 05 01:10:45.916 E ns/openshift-etcd-operator pod/etcd-operator-6b5b997787-txqht node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:11:31.042 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-756944cfdf-t5rfn node/ip-10-0-130-16.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): uler-operator", UID:"f354c277-4e96-4bb9-8f66-0e9dbbee6fde", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-137-81.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-137-81.us-west-1.compute.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0305 01:11:30.008243       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:11:30.009190       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0305 01:11:30.009220       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:11:30.009239       1 base_controller.go:74] Shutting down NodeController ...\nI0305 01:11:30.009257       1 base_controller.go:74] Shutting down PruneController ...\nI0305 01:11:30.009273       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0305 01:11:30.009288       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0305 01:11:30.009304       1 base_controller.go:74] Shutting down InstallerController ...\nI0305 01:11:30.009320       1 base_controller.go:74] Shutting down  ...\nI0305 01:11:30.009335       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:11:30.009348       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0305 01:11:30.009366       1 base_controller.go:74] Shutting down RevisionController ...\nI0305 01:11:30.009393       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0305 01:11:30.009410       1 target_config_reconciler.go:124] Shutting down TargetConfigReconciler\nI0305 01:11:30.009424       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0305 01:11:30.009694       1 builder.go:243] stopped\nF0305 01:11:30.014207       1 builder.go:209] server exited\n
Mar 05 01:11:58.173 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-56876d9fb-gt98h node/ip-10-0-130-16.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): .81:2379,https://10.0.147.82:2379\nI0305 01:11:52.983475       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"780e1902-546b-4734-97b6-bfa87d42afa7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObserveStorageUpdated' Updated storage urls to https://10.0.130.16:2379,https://10.0.137.81:2379,https://10.0.147.82:2379\nI0305 01:11:57.336745       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:11:57.337317       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0305 01:11:57.337503       1 base_controller.go:73] Shutting down RevisionController ...\nI0305 01:11:57.337573       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:11:57.337628       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0305 01:11:57.337677       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0305 01:11:57.337726       1 state_controller.go:171] Shutting down EncryptionStateController\nI0305 01:11:57.337786       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0305 01:11:57.337858       1 prune_controller.go:232] Shutting down PruneController\nI0305 01:11:57.337909       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0305 01:11:57.337995       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0305 01:11:57.338046       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0305 01:11:57.338099       1 base_controller.go:73] Shutting down  ...\nI0305 01:11:57.338176       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0305 01:11:57.338230       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:11:57.338291       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nF0305 01:11:57.338360       1 builder.go:243] stopped\n
Mar 05 01:12:36.595 E ns/openshift-apiserver pod/apiserver-f6c46f758-4t67f node/ip-10-0-137-81.us-west-1.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:12:48.969 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Definitions ConfigMaps failed: retrieving ConfigMap object failed: rpc error: code = Unavailable desc = transport is closing
Mar 05 01:13:27.560 E ns/openshift-machine-api pod/machine-api-operator-5f666b88cf-kszq9 node/ip-10-0-130-16.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Mar 05 01:13:38.173 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=scheduler container exited with code 255 (OOMKilled): e.notRegisteredErr{schemeName:"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30", gvk:schema.GroupVersionKind{Group:"", Version:"", Kind:""}, target:runtime.GroupVersioner(nil), t:(*reflect.rtype)(0x1a362e0)}\nE0305 01:13:35.272560       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)\nE0305 01:13:35.290775       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0305 01:13:35.295143       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0305 01:13:35.295215       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0305 01:13:35.362504       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0305 01:13:35.368679       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0305 01:13:35.368746       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0305 01:13:35.368832       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0305 01:13:35.368890       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0305 01:13:35.370578       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nI0305 01:13:36.915979       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0305 01:13:36.916121       1 server.go:257] leaderelection lost\n
Mar 05 01:14:23.151 E ns/openshift-machine-api pod/machine-api-controllers-858565d9f8-j2srv node/ip-10-0-147-82.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 05 01:15:55.788 E ns/openshift-cluster-machine-approver pod/machine-approver-b787b9cc5-thgfv node/ip-10-0-130-16.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): : internal error\nI0305 00:58:56.843886       1 csr_check.go:183] Falling back to machine-api authorization for ip-10-0-141-142.us-west-1.compute.internal\nI0305 00:58:56.951315       1 main.go:196] CSR csr-2rwmc approved\nE0305 01:01:40.154645       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=12614&timeoutSeconds=582&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0305 01:01:41.155344       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nI0305 01:03:23.993243       1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0305 01:03:24.002600       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=16642&timeoutSeconds=431&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0305 01:03:25.003295       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0305 01:13:26.685059       1 reflector.go:270] github.com/openshift/cluster-machine-approver/main.go:238: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?resourceVersion=17451&timeoutSeconds=319&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
Mar 05 01:15:59.543 E ns/openshift-insights pod/insights-operator-595b477945-n2q7d node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 2 (Error): 2 10.128.2.13:60392]\nI0305 01:12:24.136377       1 httplog.go:90] GET /metrics: (6.793503ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:12:33.110398       1 httplog.go:90] GET /metrics: (5.591834ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:12:54.137062       1 httplog.go:90] GET /metrics: (6.332821ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:13:03.107780       1 httplog.go:90] GET /metrics: (2.840881ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:13:12.109780       1 status.go:298] The operator is healthy\nI0305 01:13:19.612792       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0305 01:13:24.139119       1 httplog.go:90] GET /metrics: (9.520714ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:13:33.108653       1 httplog.go:90] GET /metrics: (3.610693ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:13:54.136528       1 httplog.go:90] GET /metrics: (7.050525ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:14:03.108959       1 httplog.go:90] GET /metrics: (4.008924ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:14:24.135401       1 httplog.go:90] GET /metrics: (5.93811ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:14:33.108011       1 httplog.go:90] GET /metrics: (1.996217ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:14:54.135335       1 httplog.go:90] GET /metrics: (5.897661ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:15:03.107333       1 httplog.go:90] GET /metrics: (2.183073ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:15:12.111863       1 status.go:298] The operator is healthy\nI0305 01:15:24.137323       1 httplog.go:90] GET /metrics: (7.735131ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\nI0305 01:15:33.108223       1 httplog.go:90] GET /metrics: (3.188942ms) 200 [Prometheus/2.15.2 10.128.2.13:60392]\nI0305 01:15:54.136147       1 httplog.go:90] GET /metrics: (6.585927ms) 200 [Prometheus/2.15.2 10.131.0.20:38950]\n
Mar 05 01:15:59.979 E ns/openshift-kube-storage-version-migrator pod/migrator-6f45db8764-mhz6v node/ip-10-0-149-63.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Mar 05 01:16:23.577 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-authentication-operator/authentication-operator is progressing NewReplicaSetAvailable: ReplicaSet "authentication-operator-5c6cd9f8d9" has successfully progressed.\n* deployment openshift-cluster-samples-operator/cluster-samples-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-samples-operator-769ff4c7cd" has successfully progressed.\n* deployment openshift-console/downloads is progressing NewReplicaSetAvailable: ReplicaSet "downloads-6ddbdd6446" has successfully progressed.\n* deployment openshift-controller-manager-operator/openshift-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-controller-manager-operator-54db5449c9" is progressing.\n* deployment openshift-csi-snapshot-controller-operator/csi-snapshot-controller-operator is progressing NewReplicaSetAvailable: ReplicaSet "csi-snapshot-controller-operator-5979cc7fd4" has successfully progressed.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-image-registry-operator-f49b65d54" has successfully progressed.\n* deployment openshift-ingress-operator/ingress-operator is progressing NewReplicaSetAvailable: ReplicaSet "ingress-operator-67d5566659" has successfully progressed.\n* deployment openshift-machine-api/cluster-autoscaler-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-autoscaler-operator-c9855dbcf" is progressing.\n* deployment openshift-marketplace/marketplace-operator is progressing NewReplicaSetAvailable: ReplicaSet "marketplace-operator-865fcf57dd" has successfully progressed.\n* deployment openshift-operator-lifecycle-manager/olm-operator is progressing ReplicaSetUpdated: ReplicaSet "olm-operator-6d84ff7695" is progressing.\n* deployment openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-apiserver-operator-69c465b87f" is progressing.\n* deployment openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator is progressing ReplicaSetUpdated: ReplicaSet "openshift-service-catalog-controller-manager-operator-6654b5c668" is progressing.
Mar 05 01:16:32.068 E ns/openshift-monitoring pod/kube-state-metrics-7d47d9fb8d-bw57n node/ip-10-0-149-63.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Mar 05 01:16:33.225 E ns/openshift-monitoring pod/prometheus-operator-57c4f75c85-kr7fx node/ip-10-0-137-81.us-west-1.compute.internal container=prometheus-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:33.301 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-69fbcplmr node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:43.114 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-5979cc7fd4-v6hr2 node/ip-10-0-149-63.us-west-1.compute.internal container=operator container exited with code 255 (Error): se (""),Available changed from False to True ("")\nI0305 00:59:43.194483       1 operator.go:147] Finished syncing operator at 42.874933ms\nI0305 01:07:16.807841       1 operator.go:145] Starting syncing operator at 2020-03-05 01:07:16.807804047 +0000 UTC m=+466.873208188\nI0305 01:07:17.264146       1 operator.go:147] Finished syncing operator at 456.330823ms\nI0305 01:07:17.347325       1 operator.go:145] Starting syncing operator at 2020-03-05 01:07:17.347314862 +0000 UTC m=+467.412718956\nI0305 01:07:17.425398       1 operator.go:147] Finished syncing operator at 78.074459ms\nI0305 01:07:17.425448       1 operator.go:145] Starting syncing operator at 2020-03-05 01:07:17.425441758 +0000 UTC m=+467.490845733\nI0305 01:07:17.494143       1 operator.go:147] Finished syncing operator at 68.693999ms\nI0305 01:13:28.465659       1 operator.go:145] Starting syncing operator at 2020-03-05 01:13:28.465638738 +0000 UTC m=+838.531042623\nI0305 01:13:28.879995       1 operator.go:147] Finished syncing operator at 414.347042ms\nI0305 01:13:28.960048       1 operator.go:145] Starting syncing operator at 2020-03-05 01:13:28.960032939 +0000 UTC m=+839.025437028\nI0305 01:13:29.031051       1 operator.go:147] Finished syncing operator at 71.004986ms\nI0305 01:13:29.031110       1 operator.go:145] Starting syncing operator at 2020-03-05 01:13:29.031103003 +0000 UTC m=+839.096507135\nI0305 01:13:29.074960       1 operator.go:147] Finished syncing operator at 43.848922ms\nI0305 01:16:42.488399       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:16:42.488840       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0305 01:16:42.489135       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0305 01:16:42.489160       1 logging_controller.go:93] Shutting down LogLevelController\nI0305 01:16:42.489176       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0305 01:16:42.489265       1 builder.go:243] stopped\n
Mar 05 01:16:43.160 E ns/openshift-monitoring pod/prometheus-adapter-6b787d4fcb-jhtx2 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0305 01:04:51.339277       1 adapter.go:93] successfully using in-cluster auth\nI0305 01:04:52.057083       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 05 01:16:45.155 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9fmtm node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:45.794 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-dsds7 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:46.319 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-h6dk8 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:46.913 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-5srbq node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:47.526 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9v5wk node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:47.606 E ns/openshift-image-registry pod/node-ca-j7blw node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:48.257 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-744fg node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:48.725 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-sp79k node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:49.304 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-95lnr node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:49.961 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-n86r8 node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:50.547 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-j7jhw node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:50.870 E ns/openshift-image-registry pod/node-ca-b4jd7 node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:51.111 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-hlgrk node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:51.707 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-dvtcj node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:52.379 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-88gv7 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:52.975 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-42cmz node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:53.507 E ns/openshift-image-registry pod/node-ca-6v2fv node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:53.529 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9v5nn node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:54.105 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-wz4n6 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:54.766 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-mkkx2 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:55.319 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9ggn8 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:55.361 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-6cc7bc4c78-w98vq node/ip-10-0-142-46.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Mar 05 01:16:55.393 E ns/openshift-monitoring pod/grafana-f8dd6b7df-42j8v node/ip-10-0-142-46.us-west-1.compute.internal container=grafana container exited with code 1 (Error): 
Mar 05 01:16:55.393 E ns/openshift-monitoring pod/grafana-f8dd6b7df-42j8v node/ip-10-0-142-46.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Mar 05 01:16:55.908 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-bzt2z node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:56.506 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-kqt6f node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:57.145 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-25gl7 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:57.704 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-dvbh6 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:58.319 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-qrxq2 node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:58.903 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-z2xj8 node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:16:59.506 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-684xf node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:00.390 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-5ws5r node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:00.494 E ns/openshift-controller-manager pod/controller-manager-rvfkw node/ip-10-0-137-81.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): eam: stream error: stream ID 123; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:12:49.632153       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 3; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:12:49.632296       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 115; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:13:36.471572       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 581; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:13:36.471740       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 37; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:13:36.471834       1 reflector.go:340] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 579; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:13:36.472097       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 603; INTERNAL_ERROR") has prevented the request from succeeding\n
Mar 05 01:17:00.913 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-fvqjj node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:01.082 E ns/openshift-image-registry pod/node-ca-cr95c node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:01.566 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9djqg node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:02.175 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9rh4p node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:02.703 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-vqmbd node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:03.318 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-4wbww node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:03.646 E ns/openshift-controller-manager pod/controller-manager-fsvzq node/ip-10-0-130-16.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0305 00:59:53.701778       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0305 00:59:53.703162       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable-initial@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0305 00:59:53.703208       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0305 00:59:53.703220       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0305 00:59:53.703386       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nE0305 01:05:21.736951       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0305 01:05:51.727709       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0305 01:06:21.726850       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\nE0305 01:06:51.728476       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n
Mar 05 01:17:03.890 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-pwrcn node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:04.261 E ns/openshift-monitoring pod/thanos-querier-65787df88c-t2q98 node/ip-10-0-149-63.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/03/05 01:07:39 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:07:39 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:07:39 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:07:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/05 01:07:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:07:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:07:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:07:39 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0305 01:07:39.488548       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:07:39 http.go:107: HTTPS: listening on [::]:9091\n
Mar 05 01:17:04.495 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-ndvl6 node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:05.161 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-csfwv node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:05.705 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-6lqs8 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:06.296 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-rc57q node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:06.917 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-p9hdr node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:07.499 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-wsffz node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:08.162 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-crdft node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:08.693 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-zv4vl node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:09.301 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-5kd9r node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:09.902 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-mdmtg node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:10.050 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-63.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/05 01:05:05 Watching directory: "/etc/alertmanager/config"\n
Mar 05 01:17:10.050 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-63.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/05 01:05:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:05:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:05:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:05:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/05 01:05:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:05:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:05:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:05:05 http.go:107: HTTPS: listening on [::]:9095\nI0305 01:05:05.684970       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 05 01:17:10.517 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-k2smm node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:10.935 E ns/openshift-image-registry pod/node-ca-hnmw5 node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:11.101 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-22tmc node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:11.699 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-qnzhv node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:12.308 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-f7wzc node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:13.159 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-jjfgv node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:13.749 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-k6wt2 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:14.350 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-lxbzc node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:14.940 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-b6c7n node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:15.535 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-wp7fw node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:16.135 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-x2p98 node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:16.696 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-679vs node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:17.299 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-jfmvq node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:17.925 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-wtn6n node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:18.510 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-wb6hw node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:18.845 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:16:48.220Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:16:48.226Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:16:48.227Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:16:48.227Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:16:48.228Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:16:48.228Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:16:48.228Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:16:48.229Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:16:48.229Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:17:19.117 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-fqkbh node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:19.808 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-mscdh node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:20.398 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-9xh7g node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:20.878 E ns/openshift-service-ca-operator pod/service-ca-operator-686fcc6759-phpmd node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:21.101 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-2jnjz node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:21.121 E ns/openshift-image-registry pod/node-ca-gmnv8 node/ip-10-0-130-16.us-west-1.compute.internal container=node-ca container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:21.734 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-h57rd node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:22.343 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-pswbp node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:22.913 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-jhbql node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:23.545 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-p8ftq node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:24.100 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-f466w node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:24.774 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-7rr4p node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:25.946 E ns/openshift-operator-lifecycle-manager pod/packageserver-68c7d5cfdf-pcb9w node/ip-10-0-130-16.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:25.964 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-8649w node/ip-10-0-142-46.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:28.876 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): t-signing-key-file string   Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key. (Requires the 'TokenRequest' feature gate.)\n      --service-cluster-ip-range string           A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.\n      --service-node-port-range portRange         A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767)\n\nGlobal flags:\n\n      --add-dir-header                   If true, adds the file directory to the header\n      --alsologtostderr                  log to standard error as well as files\n  -h, --help                             help for kube-apiserver\n      --log-backtrace-at traceLocation   when logging hits line file:N, emit a stack trace (default :0)\n      --log-dir string                   If non-empty, write log files in this directory\n      --log-file string                  If non-empty, use this log file\n      --log-file-max-size uint           Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)\n      --log-flush-frequency duration     Maximum number of seconds between log flushes (default 5s)\n      --logtostderr                      log to standard error instead of files (default true)\n      --skip-headers                     If true, avoid header prefixes in the log messages\n      --skip-log-headers                 If true, avoid headers when opening log files\n      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)\n  -v, --v Level                          number for the log level verbosity (default 0)\n      --version version[=true]           Print version information and quit\n      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging\n\n
Mar 05 01:17:28.887 E ns/openshift-monitoring pod/node-exporter-9rdpf node/ip-10-0-147-82.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:15:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:15:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:15:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:17:28.892 E ns/openshift-controller-manager pod/controller-manager-fkncx node/ip-10-0-147-82.us-west-1.compute.internal container=controller-manager container exited with code 137 (Error): I0305 01:00:42.934050       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0305 01:00:42.936388       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable-initial@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0305 01:00:42.936425       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable-initial@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0305 01:00:42.936528       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0305 01:00:42.938936       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Mar 05 01:17:29.584 E ns/openshift-image-registry pod/image-registry-7ddd68bfdd-mtsmd node/ip-10-0-141-142.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:35.665 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/03/05 01:07:47 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Mar 05 01:17:35.665 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/03/05 01:07:47 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/03/05 01:07:47 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:07:47 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:07:47 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/05 01:07:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:07:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/03/05 01:07:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:07:47 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/03/05 01:07:47 http.go:107: HTTPS: listening on [::]:9091\nI0305 01:07:47.533540       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:16:38 oauthproxy.go:774: basicauth: 10.128.2.19:49908 Authorization header does not start with 'Basic', skipping basic authentication\n
Mar 05 01:17:35.665 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-03-05T01:07:46.596661534Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-03-05T01:07:46.599074116Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-03-05T01:07:51.752413673Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-03-05T01:07:51.752504379Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Mar 05 01:17:35.672 E ns/openshift-console-operator pod/console-operator-6dcd8888cc-h8ln4 node/ip-10-0-137-81.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): eamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.660252       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.660710       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.662451       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.663102       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.664348       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.664812       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.665661       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:26.666097       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:17:35.017025       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:17:35.017465       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0305 01:17:35.017598       1 controller.go:70] Shutting down Console\nI0305 01:17:35.017624       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0305 01:17:35.017641       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:17:35.017665       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0305 01:17:35.017683       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0305 01:17:35.017697       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0305 01:17:35.017714       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0305 01:17:35.017728       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0305 01:17:35.017937       1 builder.go:243] stopped\n
Mar 05 01:17:37.668 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-46.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/05 01:05:12 Watching directory: "/etc/alertmanager/config"\n
Mar 05 01:17:37.668 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-142-46.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/05 01:05:13 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:05:13 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:05:13 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:05:13 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/05 01:05:13 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:05:13 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:05:13 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0305 01:05:13.195982       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:05:13 http.go:107: HTTPS: listening on [::]:9095\n2020/03/05 01:05:30 oauthproxy.go:782: requestauth: 10.131.0.18:43512 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/03/05 01:05:36 oauthproxy.go:782: requestauth: 10.131.0.18:43512 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/03/05 01:07:00 oauthproxy.go:782: requestauth: 10.131.0.18:43512 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n2020/03/05 01:07:06 oauthproxy.go:782: requestauth: 10.131.0.18:43512 [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n
Mar 05 01:17:44.712 E ns/openshift-image-registry pod/image-registry-789df94df5-hp6xl node/ip-10-0-149-63.us-west-1.compute.internal container=registry container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:17:47.803 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:17:42.593Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:17:42.601Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:17:42.602Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:17:48.686 E ns/openshift-monitoring pod/thanos-querier-65787df88c-wnbbt node/ip-10-0-141-142.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/03/05 01:07:35 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:07:35 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:07:35 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:07:35 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/05 01:07:35 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:07:35 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:07:35 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:07:35 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/03/05 01:07:35 http.go:107: HTTPS: listening on [::]:9091\nI0305 01:07:35.508326       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 05 01:17:55.746 E ns/openshift-monitoring pod/node-exporter-5h99k node/ip-10-0-137-81.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:18:00.735 E ns/openshift-marketplace pod/redhat-marketplace-b4f4f5d8-nd7xr node/ip-10-0-149-63.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Mar 05 01:18:04.846 E ns/openshift-marketplace pod/redhat-operators-7c8f5f6c67-b2z95 node/ip-10-0-142-46.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Mar 05 01:18:07.766 E ns/openshift-monitoring pod/node-exporter-mth8z node/ip-10-0-149-63.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:16:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:18:14.505 E ns/openshift-monitoring pod/node-exporter-ghqsw node/ip-10-0-141-142.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:18:14.839 E ns/openshift-marketplace pod/certified-operators-f77d6c559-4m7vr node/ip-10-0-149-63.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Mar 05 01:18:28.261 E ns/openshift-monitoring pod/node-exporter-k9grs node/ip-10-0-130-16.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:30Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:17:45Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:00Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:15Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:18:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:19:36.103 E ns/openshift-sdn pod/sdn-controller-6xmhv node/ip-10-0-137-81.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0305 00:49:40.380641       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0305 00:57:00.349016       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Mar 05 01:19:47.295 E ns/openshift-sdn pod/sdn-controller-jrtlz node/ip-10-0-147-82.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): pgrade-1223"\nI0305 01:08:55.516013       1 vnids.go:115] Allocated netid 7822920 for namespace "e2e-k8s-sig-apps-job-upgrade-9385"\nI0305 01:08:55.530102       1 vnids.go:115] Allocated netid 15570251 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6677"\nI0305 01:08:55.546458       1 vnids.go:115] Allocated netid 5793472 for namespace "e2e-frontend-ingress-available-6466"\nI0305 01:08:55.567135       1 vnids.go:115] Allocated netid 4974768 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-4404"\nI0305 01:08:55.583733       1 vnids.go:115] Allocated netid 682052 for namespace "e2e-k8s-service-lb-available-2552"\nI0305 01:08:55.615031       1 vnids.go:115] Allocated netid 11111663 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-3303"\nI0305 01:08:55.626029       1 vnids.go:115] Allocated netid 15081741 for namespace "e2e-control-plane-available-1768"\nE0305 01:13:26.646737       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.NetNamespace: Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/netnamespaces?allowWatchBookmarks=true&resourceVersion=20252&timeout=6m22s&timeoutSeconds=382&watch=true: dial tcp 10.0.157.223:6443: connect: connection refused\nE0305 01:13:26.648061       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=20518&timeout=9m7s&timeoutSeconds=547&watch=true: dial tcp 10.0.157.223:6443: connect: connection refused\nE0305 01:17:26.684228       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=20518&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp 10.0.137.104:6443: connect: connection refused\n
Mar 05 01:19:51.615 E ns/openshift-multus pod/multus-admission-controller-bql75 node/ip-10-0-130-16.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 05 01:19:51.685 E ns/openshift-multus pod/multus-6mbz4 node/ip-10-0-130-16.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 05 01:20:03.208 E ns/openshift-sdn pod/sdn-gnhrj node/ip-10-0-137-81.us-west-1.compute.internal container=sdn container exited with code 255 (Error):     2567 roundrobin.go:217] Delete endpoint 10.128.2.5:1936 for service "openshift-ingress/router-internal-default:metrics"\nI0305 01:18:32.627750    2567 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-ingress/router-internal-default:https to [10.128.2.36:443 10.131.0.53:443]\nI0305 01:18:32.627762    2567 roundrobin.go:217] Delete endpoint 10.128.2.5:443 for service "openshift-ingress/router-internal-default:https"\nI0305 01:18:32.869803    2567 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:18:32.869831    2567 proxier.go:347] userspace syncProxyRules took 78.766235ms\nI0305 01:18:33.168528    2567 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:18:33.168556    2567 proxier.go:347] userspace syncProxyRules took 76.338571ms\nI0305 01:18:37.016725    2567 pod.go:539] CNI_DEL openshift-console/console-d5658fbff-wbx5x\nI0305 01:19:03.434602    2567 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:19:03.434629    2567 proxier.go:347] userspace syncProxyRules took 79.742724ms\nI0305 01:19:20.561980    2567 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.2:6443 10.130.0.3:6443]\nI0305 01:19:20.562113    2567 roundrobin.go:217] Delete endpoint 10.129.0.16:6443 for service "openshift-multus/multus-admission-controller:"\nI0305 01:19:20.912481    2567 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:19:20.912513    2567 proxier.go:347] userspace syncProxyRules took 101.711515ms\nI0305 01:19:51.180448    2567 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:19:51.180472    2567 proxier.go:347] userspace syncProxyRules took 72.572447ms\nI0305 01:20:02.429495    2567 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0305 01:20:02.429544    2567 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 05 01:20:32.120 E ns/openshift-sdn pod/sdn-df46w node/ip-10-0-142-46.us-west-1.compute.internal container=sdn container exited with code 255 (Error): local port "nodePort for e2e-k8s-service-lb-available-2552/service-test:" (:32017/tcp)\nI0305 01:19:56.180768    6154 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31104/tcp)\nI0305 01:19:56.217472    6154 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32683\nI0305 01:19:56.219898    6154 pod.go:539] CNI_DEL openshift-ingress/router-default-768d7bb5dd-48hbn\nI0305 01:19:56.227063    6154 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0305 01:19:56.227100    6154 cmd.go:173] openshift-sdn network plugin registering startup\nI0305 01:19:56.227251    6154 cmd.go:177] openshift-sdn network plugin ready\nI0305 01:20:06.666025    6154 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.2:6443 10.129.0.78:6443 10.130.0.3:6443]\nI0305 01:20:06.694838    6154 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.78:6443 10.130.0.3:6443]\nI0305 01:20:06.694875    6154 roundrobin.go:217] Delete endpoint 10.128.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0305 01:20:06.926174    6154 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:06.926196    6154 proxier.go:347] userspace syncProxyRules took 70.207295ms\nI0305 01:20:07.160309    6154 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:07.160337    6154 proxier.go:347] userspace syncProxyRules took 69.426568ms\nI0305 01:20:25.402150    6154 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0305 01:20:30.999031    6154 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0305 01:20:30.999071    6154 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 05 01:20:37.491 E ns/openshift-multus pod/multus-admission-controller-zdb95 node/ip-10-0-147-82.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 05 01:20:57.225 E ns/openshift-sdn pod/sdn-n758s node/ip-10-0-149-63.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 29.0.78:6443 10.130.0.3:6443]\nI0305 01:20:06.695450    2823 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.129.0.78:6443 10.130.0.3:6443]\nI0305 01:20:06.695494    2823 roundrobin.go:217] Delete endpoint 10.128.0.2:6443 for service "openshift-multus/multus-admission-controller:"\nI0305 01:20:06.931836    2823 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:06.931861    2823 proxier.go:347] userspace syncProxyRules took 72.804617ms\nI0305 01:20:07.180274    2823 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:07.180297    2823 proxier.go:347] userspace syncProxyRules took 72.158683ms\nI0305 01:20:37.445289    2823 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:37.445313    2823 proxier.go:347] userspace syncProxyRules took 87.633555ms\nI0305 01:20:48.505709    2823 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.78:6443 10.130.0.3:6443]\nI0305 01:20:48.519865    2823 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.78:6443]\nI0305 01:20:48.519900    2823 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0305 01:20:48.767467    2823 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:48.767491    2823 proxier.go:347] userspace syncProxyRules took 71.954382ms\nI0305 01:20:49.013362    2823 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:49.013389    2823 proxier.go:347] userspace syncProxyRules took 71.292096ms\nI0305 01:20:56.583761    2823 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0305 01:20:56.583804    2823 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 05 01:21:16.963 E ns/openshift-sdn pod/sdn-czdjm node/ip-10-0-130-16.us-west-1.compute.internal container=sdn container exited with code 255 (Error): 305 01:20:33.451429   12165 proxier.go:347] userspace syncProxyRules took 271.212856ms\nI0305 01:20:33.528753   12165 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-2552/service-test:" (:32017/tcp)\nI0305 01:20:33.528950   12165 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32257/tcp)\nI0305 01:20:33.529068   12165 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31104/tcp)\nI0305 01:20:33.572217   12165 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32683\nI0305 01:20:33.786175   12165 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0305 01:20:33.786211   12165 cmd.go:173] openshift-sdn network plugin registering startup\nI0305 01:20:33.786333   12165 cmd.go:177] openshift-sdn network plugin ready\nI0305 01:20:48.505006   12165 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.78:6443 10.130.0.3:6443]\nI0305 01:20:48.517967   12165 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.78:6443]\nI0305 01:20:48.518007   12165 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:"\nI0305 01:20:48.800644   12165 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:48.800673   12165 proxier.go:347] userspace syncProxyRules took 94.688777ms\nI0305 01:20:49.087614   12165 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:20:49.087643   12165 proxier.go:347] userspace syncProxyRules took 73.319592ms\nI0305 01:21:16.092125   12165 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0305 01:21:16.092181   12165 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Mar 05 01:21:28.202 E ns/openshift-multus pod/multus-h8ft2 node/ip-10-0-142-46.us-west-1.compute.internal container=kube-multus container exited with code 137 (Error): 
Mar 05 01:21:40.935 E ns/openshift-sdn pod/sdn-vvhgk node/ip-10-0-141-142.us-west-1.compute.internal container=sdn container exited with code 255 (Error): ns/dns-default:dns -> 172.30.0.10\nI0305 01:21:01.277086    5882 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:21:01.277115    5882 proxier.go:347] userspace syncProxyRules took 220.77732ms\nI0305 01:21:01.297714    5882 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:21:01.297740    5882 proxier.go:347] userspace syncProxyRules took 241.224715ms\nI0305 01:21:01.367831    5882 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-2552/service-test:" (:32017/tcp)\nI0305 01:21:01.368005    5882 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:31104/tcp)\nI0305 01:21:01.368102    5882 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:32257/tcp)\nI0305 01:21:01.399686    5882 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 32683\nI0305 01:21:01.583624    5882 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0305 01:21:01.583662    5882 cmd.go:173] openshift-sdn network plugin registering startup\nI0305 01:21:01.583792    5882 cmd.go:177] openshift-sdn network plugin ready\nI0305 01:21:30.582421    5882 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0305 01:21:31.191398    5882 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:21:31.191424    5882 proxier.go:347] userspace syncProxyRules took 69.815495ms\nI0305 01:21:37.336262    5882 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller: to [10.128.0.67:6443 10.129.0.78:6443 10.130.0.67:6443]\nI0305 01:21:37.591782    5882 proxier.go:368] userspace proxy: processing 0 service events\nI0305 01:21:37.591813    5882 proxier.go:347] userspace syncProxyRules took 89.765012ms\nF0305 01:21:40.440086    5882 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Mar 05 01:22:17.664 E ns/openshift-multus pod/multus-9c86p node/ip-10-0-149-63.us-west-1.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:23:04.133 E ns/openshift-multus pod/multus-rjd5b node/ip-10-0-147-82.us-west-1.compute.internal container=kube-multus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:23:52.204 E ns/openshift-multus pod/multus-8p4f9 node/ip-10-0-141-142.us-west-1.compute.internal container=kube-multus container exited with code 137 (OOMKilled): 
Mar 05 01:24:17.616 E ns/openshift-machine-config-operator pod/machine-config-operator-5cc68fc9bc-446jq node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:26:12.781 E ns/openshift-machine-config-operator pod/machine-config-daemon-xcfxp node/ip-10-0-147-82.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:26:16.511 E ns/openshift-machine-config-operator pod/machine-config-daemon-kxs89 node/ip-10-0-141-142.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:26:28.042 E ns/openshift-machine-config-operator pod/machine-config-daemon-5w24t node/ip-10-0-130-16.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:26:37.476 E ns/openshift-machine-config-operator pod/machine-config-daemon-k44sq node/ip-10-0-137-81.us-west-1.compute.internal container=machine-config-daemon container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:26:37.476 E ns/openshift-machine-config-operator pod/machine-config-daemon-k44sq node/ip-10-0-137-81.us-west-1.compute.internal container=oauth-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:26:49.244 E ns/openshift-machine-config-operator pod/machine-config-daemon-w8c8h node/ip-10-0-149-63.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:26:57.881 E ns/openshift-machine-config-operator pod/machine-config-daemon-m9q4c node/ip-10-0-142-46.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:27:07.180 E ns/openshift-machine-config-operator pod/machine-config-controller-599d648f85-rfgtz node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-controller container exited with code 2 (Error): ving resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0305 00:59:21.590539       1 node_controller.go:452] Pool worker: node ip-10-0-142-46.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 00:59:21.590695       1 node_controller.go:452] Pool worker: node ip-10-0-142-46.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 00:59:21.590746       1 node_controller.go:452] Pool worker: node ip-10-0-142-46.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0305 00:59:44.025935       1 node_controller.go:452] Pool worker: node ip-10-0-149-63.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 00:59:44.025971       1 node_controller.go:452] Pool worker: node ip-10-0-149-63.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 00:59:44.025983       1 node_controller.go:452] Pool worker: node ip-10-0-149-63.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0305 01:00:16.785365       1 node_controller.go:452] Pool worker: node ip-10-0-141-142.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 01:00:16.785562       1 node_controller.go:452] Pool worker: node ip-10-0-141-142.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d579b28c4da87fe97d3cca400dc010ea\nI0305 01:00:16.785587       1 node_controller.go:452] Pool worker: node ip-10-0-141-142.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\n
Mar 05 01:28:47.516 E ns/openshift-machine-config-operator pod/machine-config-server-qdjmz node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0305 00:54:16.380077       1 start.go:38] Version: machine-config-daemon-4.5.0-202003042001-4-g09266642-dirty (092666426506d8d2b71ef0b17a7af0e955398d8f)\nI0305 00:54:16.381188       1 api.go:51] Launching server on :22624\nI0305 00:54:16.381254       1 api.go:51] Launching server on :22623\nI0305 00:55:20.650064       1 api.go:97] Pool worker requested by 10.0.157.223:26829\nI0305 00:56:38.699304       1 api.go:97] Pool worker requested by 10.0.157.223:19429\n
Mar 05 01:28:53.493 E ns/openshift-machine-config-operator pod/machine-config-server-xmzzv node/ip-10-0-147-82.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0305 00:54:14.265222       1 start.go:38] Version: machine-config-daemon-4.5.0-202003042001-4-g09266642-dirty (092666426506d8d2b71ef0b17a7af0e955398d8f)\nI0305 00:54:14.266244       1 api.go:51] Launching server on :22624\nI0305 00:54:14.266316       1 api.go:51] Launching server on :22623\nI0305 00:55:18.726159       1 api.go:97] Pool worker requested by 10.0.157.223:26793\n
Mar 05 01:28:54.600 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-5ff7d856b8-hjmvc node/ip-10-0-149-63.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Mar 05 01:28:54.685 E ns/openshift-marketplace pod/redhat-operators-8c947c655-z9f2g node/ip-10-0-149-63.us-west-1.compute.internal container=redhat-operators container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:54.713 E ns/openshift-marketplace pod/redhat-marketplace-56c7f76897-ws66b node/ip-10-0-149-63.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Mar 05 01:28:55.710 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-63.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/05 01:17:36 Watching directory: "/etc/alertmanager/config"\n
Mar 05 01:28:55.710 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-149-63.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/05 01:17:36 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:17:36 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:17:36 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:17:36 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/05 01:17:36 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:17:36 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:17:36 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:17:36 http.go:107: HTTPS: listening on [::]:9095\nI0305 01:17:36.517110       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 05 01:28:55.742 E ns/openshift-monitoring pod/prometheus-adapter-79474b56cf-b58zs node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0305 01:16:41.577747       1 adapter.go:93] successfully using in-cluster auth\nI0305 01:16:41.965624       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=kube-rbac-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prom-label-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=thanos-sidecar container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:55.847 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:56.160 E ns/openshift-apiserver pod/apiserver-6646bf8b4d-dgmb8 node/ip-10-0-130-16.us-west-1.compute.internal container=openshift-apiserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:59.137 E ns/openshift-operator-lifecycle-manager pod/packageserver-7d8cfd5664-wg76j node/ip-10-0-137-81.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:28:59.222 E ns/openshift-insights pod/insights-operator-64cd9bcd65-rfbd4 node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 2 (Error): s/2.15.2 10.131.0.25:48742]\nI0305 01:25:29.958066       1 httplog.go:90] GET /metrics: (9.013057ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:25:33.256852       1 httplog.go:90] GET /metrics: (1.723877ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:25:59.967203       1 httplog.go:90] GET /metrics: (18.210608ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:26:03.257062       1 httplog.go:90] GET /metrics: (1.870164ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:26:12.379633       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0305 01:26:12.385692       1 configobserver.go:90] Found cloud.openshift.com token\nI0305 01:26:12.385729       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0305 01:26:12.390909       1 status.go:298] The operator is healthy\nI0305 01:26:29.958536       1 httplog.go:90] GET /metrics: (9.413275ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:26:33.256741       1 httplog.go:90] GET /metrics: (1.551033ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:26:59.968597       1 httplog.go:90] GET /metrics: (19.457824ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:27:03.258468       1 httplog.go:90] GET /metrics: (3.015428ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:27:29.958801       1 httplog.go:90] GET /metrics: (9.69134ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:27:33.256965       1 httplog.go:90] GET /metrics: (1.849927ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:27:59.958425       1 httplog.go:90] GET /metrics: (9.089789ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:28:03.256868       1 httplog.go:90] GET /metrics: (1.795564ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\nI0305 01:28:12.384005       1 status.go:298] The operator is healthy\nI0305 01:28:29.961408       1 httplog.go:90] GET /metrics: (12.192723ms) 200 [Prometheus/2.15.2 10.128.2.34:55300]\nI0305 01:28:33.256987       1 httplog.go:90] GET /metrics: (1.869787ms) 200 [Prometheus/2.15.2 10.131.0.25:48742]\n
Mar 05 01:29:01.227 E ns/openshift-service-ca-operator pod/service-ca-operator-85fdbb449b-m5ln7 node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Mar 05 01:29:01.360 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-69c465b87f-lkrz7 node/ip-10-0-130-16.us-west-1.compute.internal container=operator container exited with code 255 (Error): orkload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0305 01:28:22.455793       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:28:28.704858       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0305 01:28:28.704894       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0305 01:28:28.706120       1 httplog.go:90] GET /metrics: (5.461849ms) 200 [Prometheus/2.15.2 10.131.0.25:48096]\nI0305 01:28:32.468712       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:28:41.685198       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0305 01:28:41.697778       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0305 01:28:42.480155       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:28:43.685364       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0305 01:28:43.685421       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0305 01:28:43.686589       1 httplog.go:90] GET /metrics: (5.529901ms) 200 [Prometheus/2.15.2 10.128.2.34:33158]\nI0305 01:28:52.493165       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:28:59.501068       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 33 items received\nI0305 01:28:59.876533       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:28:59.876762       1 leaderelection.go:66] leaderelection lost\n
Mar 05 01:29:21.460 E ns/openshift-console pod/console-844786c8db-h8qpv node/ip-10-0-130-16.us-west-1.compute.internal container=console container exited with code 2 (Error): 2020-03-05T01:17:50Z cmd/main: cookies are secure!\n2020-03-05T01:17:50Z cmd/main: Binding to [::]:8443...\n2020-03-05T01:17:50Z cmd/main: using TLS\n
Mar 05 01:29:23.212 E clusteroperator/ingress changed Degraded to True: IngressControllersDegraded: Some ingresscontrollers are degraded: default
Mar 05 01:29:34.061 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-141-142.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:29:21.700Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:29:21.703Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:29:21.704Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:29:21.704Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:29:21.705Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:29:21.705Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:29:21.705Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:29:21.705Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:29:21.706Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:30:59.681 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Mar 05 01:31:31.138 E ns/openshift-cluster-node-tuning-operator pod/tuned-jgzrp node/ip-10-0-149-63.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 175    2918 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0305 01:17:57.400537    2918 tuned.go:393] getting recommended profile...\nI0305 01:17:57.577148    2918 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0305 01:17:57.577311    2918 tuned.go:286] starting tuned...\n2020-03-05 01:17:57,723 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-03-05 01:17:57,733 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-03-05 01:17:57,733 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-03-05 01:17:57,734 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-03-05 01:17:57,735 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-03-05 01:17:58,002 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:17:58,002 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:17:58,021 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:17:58,023 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:17:58,029 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:17:58,031 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:17:58,034 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:17:58,256 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:17:58,270 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n2020-03-05 01:29:43,658 INFO     tuned.daemon.controller: terminating controller\n2020-03-05 01:29:43,659 INFO     tuned.daemon.daemon: stopping tuning\nI0305 01:29:43.658554    2918 tuned.go:115] received signal: terminated\nI0305 01:29:43.658614    2918 tuned.go:327] sending TERM to PID 3023\n
Mar 05 01:31:31.152 E ns/openshift-monitoring pod/node-exporter-zcv72 node/ip-10-0-149-63.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:29:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:29:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:29:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:31:31.168 E ns/openshift-sdn pod/ovs-rnkkm node/ip-10-0-149-63.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): #427: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:28:54.621Z|00086|connmgr|INFO|br0<->unix#430: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:28:54.659Z|00087|bridge|INFO|bridge br0: deleted interface veth4f251dac on port 49\n2020-03-05T01:28:54.721Z|00088|connmgr|INFO|br0<->unix#433: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:28:54.767Z|00089|connmgr|INFO|br0<->unix#436: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:28:54.794Z|00090|bridge|INFO|bridge br0: deleted interface veth5637be8e on port 26\n2020-03-05T01:28:54.833Z|00091|connmgr|INFO|br0<->unix#439: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:28:54.906Z|00092|connmgr|INFO|br0<->unix#442: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:28:54.935Z|00093|bridge|INFO|bridge br0: deleted interface veth223937fe on port 25\n2020-03-05T01:28:55.010Z|00094|connmgr|INFO|br0<->unix#445: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:28:55.044Z|00095|connmgr|INFO|br0<->unix#448: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:28:55.069Z|00096|bridge|INFO|bridge br0: deleted interface veth4c80c608 on port 47\n2020-03-05T01:29:38.934Z|00097|connmgr|INFO|br0<->unix#485: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:38.961Z|00098|connmgr|INFO|br0<->unix#488: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:38.983Z|00099|bridge|INFO|bridge br0: deleted interface veth77cf0ab0 on port 24\n2020-03-05T01:29:40.612Z|00100|connmgr|INFO|br0<->unix#491: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:40.639Z|00101|connmgr|INFO|br0<->unix#494: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:40.660Z|00102|bridge|INFO|bridge br0: deleted interface veth0b4a6fd5 on port 54\n2020-03-05T01:29:40.654Z|00011|jsonrpc|WARN|unix#417: receive error: Connection reset by peer\n2020-03-05T01:29:40.654Z|00012|reconnect|WARN|unix#417: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Mar 05 01:31:31.208 E ns/openshift-multus pod/multus-9jx99 node/ip-10-0-149-63.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:31:31.218 E ns/openshift-machine-config-operator pod/machine-config-daemon-6hwns node/ip-10-0-149-63.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:31:35.908 E ns/openshift-multus pod/multus-9jx99 node/ip-10-0-149-63.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:31:40.671 E ns/openshift-machine-config-operator pod/machine-config-daemon-6hwns node/ip-10-0-149-63.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 05 01:31:50.300 E ns/openshift-cluster-node-tuning-operator pod/tuned-xp4z5 node/ip-10-0-130-16.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 21] tuned "rendered" added\nI0305 01:17:12.123382    1514 tuned.go:219] extracting tuned profiles\nI0305 01:17:12.129002    1514 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0305 01:17:13.105980    1514 tuned.go:393] getting recommended profile...\nI0305 01:17:13.376160    1514 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0305 01:17:13.376298    1514 tuned.go:286] starting tuned...\n2020-03-05 01:17:13,568 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-03-05 01:17:13,577 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-03-05 01:17:13,577 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-03-05 01:17:13,578 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-03-05 01:17:13,579 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-03-05 01:17:13,625 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:17:13,625 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:17:13,645 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:17:13,646 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:17:13,656 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:17:13,658 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:17:13,663 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:17:13,841 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:17:13,852 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0305 01:29:31.469306    1514 tuned.go:115] received signal: terminated\nI0305 01:29:31.469476    1514 tuned.go:327] sending TERM to PID 1775\n
Mar 05 01:31:50.344 E ns/openshift-controller-manager pod/controller-manager-ccrsh node/ip-10-0-130-16.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): I0305 01:17:16.543115       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (unknown)\nI0305 01:17:16.544665       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable@sha256:a3d84f419db9032b07494e17bf5f6ee7a928c92e5c6ff959deef9dc128b865cc"\nI0305 01:17:16.544686       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-p6dgihk0/stable@sha256:471891b26e981d2ed9c87cdd306bc028abe62b760a7af413bd9c05389c4ea5a4"\nI0305 01:17:16.544752       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0305 01:17:16.544900       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Mar 05 01:31:50.415 E ns/openshift-monitoring pod/node-exporter-lppbl node/ip-10-0-130-16.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:28:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:29:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:29:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
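Editor's note: the two node-exporter events above repeat the Prometheus client registry error "was collected before with the same name and label values", which is what the Go client emits when a collector sends the same metric (identical name and label values) more than once in a single scrape. The sketch below is illustrative only (a hypothetical collector, not the node_exporter code) and reproduces that Gatherer error with github.com/prometheus/client_golang:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// dupCollector deliberately emits the same const metric twice per scrape.
type dupCollector struct {
	desc *prometheus.Desc
}

func (c dupCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c dupCollector) Collect(ch chan<- prometheus.Metric) {
	m := prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, 1, "aws")
	ch <- m
	ch <- m // same name and label values sent twice in one scrape
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(dupCollector{
		desc: prometheus.NewDesc("virt_platform", "illustrative duplicate metric", []string{"type"}, nil),
	})
	// Gather reports the duplicate as "... was collected before with the
	// same name and label values", matching the node-exporter log lines.
	if _, err := reg.Gather(); err != nil {
		fmt.Println(err)
	}
}
```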
Mar 05 01:31:50.434 E ns/openshift-sdn pod/sdn-controller-rklqv node/ip-10-0-130-16.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0305 01:19:34.576011       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 05 01:31:50.448 E ns/openshift-multus pod/multus-admission-controller-c4tdm node/ip-10-0-130-16.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Mar 05 01:31:50.464 E ns/openshift-multus pod/multus-dpqh2 node/ip-10-0-130-16.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:31:50.553 E ns/openshift-sdn pod/ovs-jwndq node/ip-10-0-130-16.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): 020-03-05T01:28:59.890Z|00104|connmgr|INFO|br0<->unix#444: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:00.121Z|00105|connmgr|INFO|br0<->unix#447: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:00.229Z|00106|bridge|INFO|bridge br0: deleted interface veth427449d6 on port 72\n2020-03-05T01:29:00.742Z|00011|jsonrpc|WARN|unix#385: send error: Broken pipe\n2020-03-05T01:29:00.743Z|00012|reconnect|WARN|unix#385: connection dropped (Broken pipe)\n2020-03-05T01:29:00.914Z|00013|jsonrpc|WARN|unix#389: send error: Broken pipe\n2020-03-05T01:29:00.914Z|00014|reconnect|WARN|unix#389: connection dropped (Broken pipe)\n2020-03-05T01:29:01.108Z|00015|jsonrpc|WARN|unix#395: receive error: Connection reset by peer\n2020-03-05T01:29:01.108Z|00016|reconnect|WARN|unix#395: connection dropped (Connection reset by peer)\n2020-03-05T01:29:00.661Z|00107|connmgr|INFO|br0<->unix#451: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:00.709Z|00108|connmgr|INFO|br0<->unix#454: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:00.752Z|00109|bridge|INFO|bridge br0: deleted interface veth24c8baf3 on port 71\n2020-03-05T01:29:00.816Z|00110|connmgr|INFO|br0<->unix#457: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:00.894Z|00111|connmgr|INFO|br0<->unix#460: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:00.940Z|00112|bridge|INFO|bridge br0: deleted interface veth4cf5fd30 on port 65\n2020-03-05T01:29:01.009Z|00113|connmgr|INFO|br0<->unix#463: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:01.084Z|00114|connmgr|INFO|br0<->unix#466: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:01.120Z|00115|bridge|INFO|bridge br0: deleted interface vethbceeeef9 on port 62\n2020-03-05T01:29:20.837Z|00116|connmgr|INFO|br0<->unix#485: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:29:20.870Z|00117|connmgr|INFO|br0<->unix#488: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:29:20.895Z|00118|bridge|INFO|bridge br0: deleted interface veth4a3e3e55 on port 78\ninfo: Saving flows ...\nTerminated\n
Mar 05 01:31:50.604 E ns/openshift-machine-config-operator pod/machine-config-daemon-hkl7d node/ip-10-0-130-16.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:31:50.619 E ns/openshift-machine-config-operator pod/machine-config-server-924r9 node/ip-10-0-130-16.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0305 01:28:51.663773       1 start.go:38] Version: machine-config-daemon-4.5.0-202003042001-4-g09266642-dirty (092666426506d8d2b71ef0b17a7af0e955398d8f)\nI0305 01:28:51.664786       1 api.go:51] Launching server on :22624\nI0305 01:28:51.664900       1 api.go:51] Launching server on :22623\n
Mar 05 01:31:50.688 E ns/openshift-etcd pod/etcd-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-03-05 01:12:43.906470 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-130-16.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-130-16.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:12:43.908127 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-03-05 01:12:43.908752 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-130-16.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-130-16.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:12:43.911568 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/03/05 01:12:43 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.130.16:9978: connect: connection refused". Reconnecting...\nWARNING: 2020/03/05 01:12:44 grpc: addrConn.createTransport failed to connect to {https://etcd-0.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.130.16:9978: connect: connection refused". Reconnecting...\n
Mar 05 01:31:50.702 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 1 (Error): .default.svc.cluster.local]\nI0305 01:24:26.219528       1 servicehostname.go:40] syncing servicenetwork hostnames: [172.30.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openshift openshift.default openshift.default.svc openshift.default.svc.cluster.local]\nI0305 01:24:26.250566       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0305 01:29:31.546072       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:29:31.546725       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0305 01:29:31.546746       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0305 01:29:31.546756       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0305 01:29:31.546772       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0305 01:29:31.546783       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0305 01:29:31.546794       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0305 01:29:31.546810       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0305 01:29:31.546822       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0305 01:29:31.546831       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0305 01:29:31.546844       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0305 01:29:31.546856       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0305 01:29:31.546863       1 cabundlesyncer.go:86] CA bundle controller shut down\n
Mar 05 01:31:50.702 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0305 01:13:28.233081       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 05 01:31:50.702 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:29:17.298638       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:17.299005       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:29:27.313051       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:27.314150       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Mar 05 01:31:50.723 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): itor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorsources": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorsources", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "authorization.openshift.io/v1, Resource=rolebindingrestrictions": unable to monitor quota for resource "authorization.openshift.io/v1, Resource=rolebindingrestrictions"]\nI0305 01:13:40.306852       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0305 01:13:40.306882       1 policy_controller.go:147] Started Origin Controllers\nI0305 01:13:40.307526       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0305 01:13:40.307557       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0305 01:13:40.310819       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0305 01:13:40.460754       1 shared_informer.go:204] Caches are synced for resource quota \nW0305 01:28:54.038597       1 reflector.go:326] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 701; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:28:54.038712       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 581; INTERNAL_ERROR") has prevented the request from succeeding\n
Mar 05 01:31:50.723 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:28:56.725510       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:28:56.725947       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:28:58.887942       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:28:58.888326       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:06.737837       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:06.738287       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:08.899661       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:08.900109       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:16.750850       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:16.751177       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:18.910304       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:18.910725       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:26.762707       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:26.763076       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:29:28.920720       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:29:28.921226       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Mar 05 01:31:50.723 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): -cccd87fcd to 2\nI0305 01:29:07.352185       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-cccd87fcd", UID:"fa40ab19-f06a-4bfc-924f-15211af299f9", APIVersion:"apps/v1", ResourceVersion:"37037", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: packageserver-cccd87fcd-295nx\nI0305 01:29:13.154343       1 replica_set.go:597] Too many replicas for ReplicaSet openshift-operator-lifecycle-manager/packageserver-7d8cfd5664, need 0, deleting 1\nI0305 01:29:13.154397       1 replica_set.go:225] Found 6 related ReplicaSets for ReplicaSet openshift-operator-lifecycle-manager/packageserver-7d8cfd5664: packageserver-68b7ddd998, packageserver-58f947fcbc, packageserver-9f9655b8b, packageserver-68c7d5cfdf, packageserver-7d8cfd5664, packageserver-cccd87fcd\nI0305 01:29:13.154947       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver", UID:"e0a9848a-5e58-45dc-aca4-0b6f190c79ee", APIVersion:"apps/v1", ResourceVersion:"37054", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set packageserver-7d8cfd5664 to 0\nI0305 01:29:13.155468       1 controller_utils.go:603] Controller packageserver-7d8cfd5664 deleting pod openshift-operator-lifecycle-manager/packageserver-7d8cfd5664-rqsnc\nI0305 01:29:13.174042       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-operator-lifecycle-manager", Name:"packageserver-7d8cfd5664", UID:"f64331d1-5288-42c5-98f3-13e8983332e2", APIVersion:"apps/v1", ResourceVersion:"37151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: packageserver-7d8cfd5664-rqsnc\nI0305 01:29:13.185352       1 deployment_controller.go:484] Error syncing deployment openshift-operator-lifecycle-manager/packageserver: Operation cannot be fulfilled on deployments.apps "packageserver": the object has been modified; please apply your changes to the latest version and try again\n
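Editor's note: the last line of the kube-controller-manager log above is the standard optimistic-concurrency Conflict ("the object has been modified; please apply your changes to the latest version and try again"). A common client-side pattern for handling it is to re-read the object and retry the write. The helper below is a hedged sketch using client-go's retry.RetryOnConflict; the kubeconfig path, namespace, and deployment name are hypothetical placeholders, and this is not the controller-manager's own code:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// scaleWithRetry re-reads the Deployment and retries the update whenever
// the apiserver answers with a Conflict, instead of surfacing the error.
func scaleWithRetry(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict here triggers another Get+Update attempt
	})
}

func main() {
	// Hypothetical usage against the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := scaleWithRetry(context.Background(), cs, "default", "example", 2); err != nil {
		log.Fatal(err)
	}
}
```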
Mar 05 01:31:50.723 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): gFor=[localhost] issuer="cert-recovery-controller-signer@1583370813" (2020-03-05 01:13:32 +0000 UTC to 2020-04-04 01:13:33 +0000 UTC (now=2020-03-05 01:13:35.88611779 +0000 UTC))\nI0305 01:13:35.900900       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583370815" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583370815" (2020-03-05 00:13:35 +0000 UTC to 2021-03-05 00:13:35 +0000 UTC (now=2020-03-05 01:13:35.900870317 +0000 UTC))\nI0305 01:16:07.791698       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0305 01:16:07.792093       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"29e7b6b7-7de8-4bfe-a7de-30172b4c811e", APIVersion:"v1", ResourceVersion:"24937", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' d9e5c7a2-5e07-48ab-99ab-4b8df59a7cb2 became leader\nI0305 01:16:07.796712       1 csrcontroller.go:81] Starting CSR controller\nI0305 01:16:07.796783       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0305 01:16:07.997515       1 shared_informer.go:204] Caches are synced for CSRController \nI0305 01:29:31.456789       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:29:31.457236       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0305 01:29:31.457261       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0305 01:29:31.457291       1 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/serving-cert-593078134/tls.crt::/tmp/serving-cert-593078134/tls.key\nF0305 01:29:31.457317       1 builder.go:209] server exited\nI0305 01:29:31.463146       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\n
Mar 05 01:31:50.740 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal node/ip-10-0-130-16.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): 79] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-03-05 00:37:36 +0000 UTC to 2030-03-03 00:37:36 +0000 UTC (now=2020-03-05 01:14:50.397566132 +0000 UTC))\nI0305 01:14:50.397613       1 tlsconfig.go:179] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1583369606" [] issuer="kubelet-signer" (2020-03-05 00:53:25 +0000 UTC to 2020-03-06 00:37:41 +0000 UTC (now=2020-03-05 01:14:50.397598257 +0000 UTC))\nI0305 01:14:50.397654       1 tlsconfig.go:179] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-03-05 00:37:39 +0000 UTC to 2020-03-06 00:37:39 +0000 UTC (now=2020-03-05 01:14:50.397633695 +0000 UTC))\nI0305 01:14:50.398172       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "scheduler.openshift-kube-scheduler.svc" [serving] validServingFor=[scheduler.openshift-kube-scheduler.svc,scheduler.openshift-kube-scheduler.svc.cluster.local] issuer="openshift-service-serving-signer@1583369605" (2020-03-05 00:53:37 +0000 UTC to 2022-03-05 00:53:38 +0000 UTC (now=2020-03-05 01:14:50.398150069 +0000 UTC))\nI0305 01:14:50.403998       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583370890" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583370889" (2020-03-05 00:14:49 +0000 UTC to 2021-03-05 00:14:49 +0000 UTC (now=2020-03-05 01:14:50.403981982 +0000 UTC))\n
Mar 05 01:31:56.807 E ns/openshift-marketplace pod/community-operators-686f44876d-xq6zv node/ip-10-0-141-142.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Mar 05 01:31:57.904 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-142.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/05 01:29:13 Watching directory: "/etc/alertmanager/config"\n
Mar 05 01:31:57.904 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-141-142.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/05 01:29:14 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:29:14 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:29:14 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:29:14 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/05 01:29:14 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:29:14 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:29:14 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0305 01:29:14.204037       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:29:14 http.go:107: HTTPS: listening on [::]:9095\n
Mar 05 01:31:57.922 E ns/openshift-csi-snapshot-controller pod/csi-snapshot-controller-5ff7d856b8-59bx5 node/ip-10-0-141-142.us-west-1.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Mar 05 01:31:57.952 E ns/openshift-marketplace pod/redhat-marketplace-56c7f76897-xqhhb node/ip-10-0-141-142.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Mar 05 01:31:57.966 E ns/openshift-multus pod/multus-dpqh2 node/ip-10-0-130-16.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:31:58.009 E ns/openshift-monitoring pod/prometheus-adapter-79474b56cf-tqsps node/ip-10-0-141-142.us-west-1.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0305 01:16:36.003718       1 adapter.go:93] successfully using in-cluster auth\nI0305 01:16:36.882440       1 secure_serving.go:116] Serving securely on [::]:6443\n
Mar 05 01:31:58.051 E ns/openshift-marketplace pod/certified-operators-64667d9d4f-4fhfv node/ip-10-0-141-142.us-west-1.compute.internal container=certified-operators container exited with code 2 (Error): 
Mar 05 01:31:58.079 E ns/openshift-monitoring pod/kube-state-metrics-c975756bc-2r59n node/ip-10-0-141-142.us-west-1.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Mar 05 01:31:58.164 E ns/openshift-monitoring pod/grafana-dfd5bd48c-g7lzg node/ip-10-0-141-142.us-west-1.compute.internal container=grafana container exited with code 1 (Error): 
Mar 05 01:31:58.164 E ns/openshift-monitoring pod/grafana-dfd5bd48c-g7lzg node/ip-10-0-141-142.us-west-1.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Mar 05 01:31:58.892 E ns/openshift-marketplace pod/redhat-operators-8c947c655-jz2nn node/ip-10-0-141-142.us-west-1.compute.internal container=redhat-operators container exited with code 2 (Error): 
Mar 05 01:31:58.919 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-142.us-west-1.compute.internal container=config-reloader container exited with code 2 (Error): 2020/03/05 01:16:55 Watching directory: "/etc/alertmanager/config"\n
Mar 05 01:31:58.919 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-141-142.us-west-1.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/03/05 01:16:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:16:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:16:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:16:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/03/05 01:16:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:16:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/03/05 01:16:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0305 01:16:55.448725       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:16:55 http.go:107: HTTPS: listening on [::]:9095\n
Mar 05 01:31:58.982 E ns/openshift-csi-snapshot-controller-operator pod/csi-snapshot-controller-operator-54db9fcf4d-bws9s node/ip-10-0-141-142.us-west-1.compute.internal container=operator container exited with code 255 (Error): ller diff {"status":{"conditions":[{"lastTransitionTime":"2020-03-05T00:59:31Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-03-05T01:31:47Z","message":"Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_AsExpected","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-03-05T01:31:47Z","message":"Available: Waiting for Deployment to deploy csi-snapshot-controller pods","reason":"_AsExpected","status":"False","type":"Available"},{"lastTransitionTime":"2020-03-05T00:59:34Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0305 01:31:47.223978       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-csi-snapshot-controller-operator", Name:"csi-snapshot-controller-operator", UID:"f849783f-2caa-4688-a453-d28007a9c9df", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods")\nI0305 01:31:47.254323       1 operator.go:147] Finished syncing operator at 52.683117ms\nI0305 01:31:47.254381       1 operator.go:145] Starting syncing operator at 2020-03-05 01:31:47.254374746 +0000 UTC m=+905.444429202\nI0305 01:31:47.296207       1 operator.go:147] Finished syncing operator at 41.819539ms\nI0305 01:31:48.516732       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:31:48.517374       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0305 01:31:48.517398       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0305 01:31:48.517412       1 logging_controller.go:93] Shutting down LogLevelController\nF0305 01:31:48.517489       1 builder.go:243] stopped\n
Mar 05 01:31:59.091 E ns/openshift-kube-storage-version-migrator pod/migrator-74b545d94f-hmzvc node/ip-10-0-141-142.us-west-1.compute.internal container=migrator container exited with code 2 (Error): 
Mar 05 01:32:00.235 E ns/openshift-multus pod/multus-dpqh2 node/ip-10-0-130-16.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:32:02.344 E ns/openshift-machine-config-operator pod/machine-config-daemon-hkl7d node/ip-10-0-130-16.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 05 01:32:18.301 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-149-63.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:32:15.745Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:32:15.747Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:32:15.749Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:32:15.750Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:32:15.750Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:32:15.750Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:32:15.751Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:32:15.751Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:32:15.751Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:32:15.751Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:32:20.525 E ns/openshift-authentication-operator pod/authentication-operator-56bc546ccf-zv4g9 node/ip-10-0-137-81.us-west-1.compute.internal container=operator container exited with code 255 (Error): ":"Degraded"},{"lastTransitionTime":"2020-03-05T01:17:34Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-03-05T01:06:33Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-03-05T00:53:24Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0305 01:30:46.062587       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"32e64b6d-577d-4197-917e-5f1ede191d8c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteStatusDegraded: Get https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift: http2: server sent GOAWAY and closed the connection; LastStreamID=3265, ErrCode=NO_ERROR, debug=\"\"" to ""\nI0305 01:32:17.427357       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:32:17.443940       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:32:17.443975       1 controller.go:70] Shutting down AuthenticationOperator2\nI0305 01:32:17.444000       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0305 01:32:17.444017       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0305 01:32:17.444033       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0305 01:32:17.444050       1 logging_controller.go:93] Shutting down LogLevelController\nI0305 01:32:17.444068       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0305 01:32:17.444084       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0305 01:32:17.444103       1 ingress_state_controller.go:157] Shutting down IngressStateController\nF0305 01:32:17.446857       1 builder.go:243] stopped\n
Mar 05 01:32:21.075 E ns/openshift-operator-lifecycle-manager pod/packageserver-cccd87fcd-msgd8 node/ip-10-0-130-16.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:32:22.100 E ns/openshift-etcd-operator pod/etcd-operator-f95c5854f-nbbh9 node/ip-10-0-137-81.us-west-1.compute.internal container=operator container exited with code 255 (Error): own controller.\nI0305 01:32:17.610426       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0305 01:32:17.610452       1 targetconfigcontroller.go:269] Shutting down TargetConfigController\nI0305 01:32:17.610466       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:32:17.610479       1 clustermembercontroller.go:104] Shutting down ClusterMemberController\nI0305 01:32:17.610498       1 base_controller.go:74] Shutting down RevisionController ...\nI0305 01:32:17.610512       1 etcdcertsignercontroller.go:118] Shutting down EtcdCertSignerController\nI0305 01:32:17.610528       1 base_controller.go:74] Shutting down  ...\nI0305 01:32:17.610541       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:32:17.610556       1 base_controller.go:74] Shutting down NodeController ...\nI0305 01:32:17.610571       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0305 01:32:17.610585       1 base_controller.go:74] Shutting down PruneController ...\nI0305 01:32:17.610598       1 base_controller.go:74] Shutting down InstallerController ...\nI0305 01:32:17.610612       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0305 01:32:17.610625       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0305 01:32:17.610638       1 base_controller.go:74] Shutting down  ...\nI0305 01:32:17.610649       1 host_endpoints_controller.go:357] Shutting down HostEtcdEndpointsController\nI0305 01:32:17.610699       1 bootstrap_teardown_controller.go:212] Shutting down BootstrapTeardownController\nI0305 01:32:17.610712       1 status_controller.go:212] Shutting down StatusSyncer-etcd\nI0305 01:32:17.610726       1 host_endpoints_controller.go:263] Shutting down HostEtcdEndpointsController\nI0305 01:32:17.610740       1 scriptcontroller.go:144] Shutting down ScriptControllerController\nI0305 01:32:17.611740       1 etcdmemberscontroller.go:192] Shutting down EtcdMembersController\nF0305 01:32:17.611852       1 builder.go:243] stopped\n
Mar 05 01:32:23.095 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6dbfc6cdb9-g6prm node/ip-10-0-137-81.us-west-1.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 49 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)\nStaticPodsDegraded: nodes/ip-10-0-130-16.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal container=\"scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-130-16.us-west-1.compute.internal pods/openshift-kube-scheduler-ip-10-0-130-16.us-west-1.compute.internal container=\"scheduler\" is not ready"\nI0305 01:32:21.527014       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:32:21.527272       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:32:21.527307       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0305 01:32:21.527326       1 base_controller.go:74] Shutting down InstallerController ...\nI0305 01:32:21.527341       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0305 01:32:21.527354       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0305 01:32:21.527377       1 base_controller.go:74] Shutting down  ...\nI0305 01:32:21.527401       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:32:21.527418       1 base_controller.go:74] Shutting down PruneController ...\nI0305 01:32:21.527435       1 base_controller.go:74] Shutting down NodeController ...\nI0305 01:32:21.527451       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0305 01:32:21.527466       1 base_controller.go:74] Shutting down RevisionController ...\nI0305 01:32:21.527480       1 target_config_reconciler.go:124] Shutting down TargetConfigReconciler\nI0305 01:32:21.527495       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0305 01:32:21.527511       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0305 01:32:21.527654       1 builder.go:243] stopped\n
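Editor's note: many of the operator exits recorded above (codes 1/143/255) end with a logged "Received SIGTERM or SIGINT signal, shutting down controller" as pods are rolled during the upgrade, but the monitor still records the non-zero exit as an error event. A minimal, generic Go sketch of that shutdown-on-signal pattern (illustrative only, not the library-go implementation):

```go
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel the context when SIGTERM or SIGINT arrives, mirroring the
	// "Received SIGTERM or SIGINT signal, shutting down controller"
	// lines in the operator logs above.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	// Stand-in for a controller sync loop.
	for {
		select {
		case <-ctx.Done():
			log.Println("received signal, shutting down controller")
			return
		case <-time.After(time.Second):
			log.Println("controller sync tick")
		}
	}
}
```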
Mar 05 01:32:23.416 E ns/openshift-image-registry pod/cluster-image-registry-operator-548d75bff7-fhfxv node/ip-10-0-137-81.us-west-1.compute.internal container=cluster-image-registry-operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:32:23.416 E ns/openshift-image-registry pod/cluster-image-registry-operator-548d75bff7-fhfxv node/ip-10-0-137-81.us-west-1.compute.internal container=cluster-image-registry-operator-watch container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:32:54.529 E ns/openshift-monitoring pod/prometheus-operator-6ff8876c78-hpv59 node/ip-10-0-130-16.us-west-1.compute.internal container=prometheus-operator container exited with code 1 (Error):  ts=2020-03-05T01:32:45.409530332Z caller=operator.go:452 component=prometheusoperator msg="connection established" cluster-version=v1.17.1\nlevel=info ts=2020-03-05T01:32:45.763040375Z caller=operator.go:655 component=alertmanageroperator msg="CRD updated" crd=Alertmanager\nlevel=info ts=2020-03-05T01:32:45.788195439Z caller=operator.go:682 component=thanosoperator msg="CRD updated" crd=ThanosRuler\nlevel=info ts=2020-03-05T01:32:45.803642373Z caller=operator.go:1918 component=prometheusoperator msg="CRD updated" crd=Prometheus\nlevel=info ts=2020-03-05T01:32:45.83035036Z caller=operator.go:1918 component=prometheusoperator msg="CRD updated" crd=ServiceMonitor\nlevel=info ts=2020-03-05T01:32:45.851098549Z caller=operator.go:1918 component=prometheusoperator msg="CRD updated" crd=PodMonitor\nlevel=info ts=2020-03-05T01:32:45.863988414Z caller=operator.go:1918 component=prometheusoperator msg="CRD updated" crd=PrometheusRule\nlevel=info ts=2020-03-05T01:32:48.864409517Z caller=operator.go:230 component=alertmanageroperator msg="CRD API endpoints ready"\nlevel=info ts=2020-03-05T01:32:49.07583777Z caller=operator.go:185 component=alertmanageroperator msg="successfully synced all caches"\nlevel=warn ts=2020-03-05T01:32:49.076380057Z caller=operator.go:516 component=alertmanageroperator msg="alertmanager key=openshift-monitoring/main, field spec.baseImage is deprecated, 'spec.image' field should be used instead"\nlevel=info ts=2020-03-05T01:32:49.076563115Z caller=operator.go:459 component=alertmanageroperator msg="sync alertmanager" key=openshift-monitoring/main\nlevel=info ts=2020-03-05T01:32:49.171060701Z caller=operator.go:459 component=alertmanageroperator msg="sync alertmanager" key=openshift-monitoring/main\nts=2020-03-05T01:32:52.507237435Z caller=main.go:304 msg="Unhandled error received. Exiting..." err="creating CRDs failed: waiting for PrometheusRule crd failed: timed out waiting for Custom Resource: failed to list CRD: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field"\n
Mar 05 01:33:10.420 E ns/openshift-operator-lifecycle-manager pod/packageserver-cccd87fcd-kgs7z node/ip-10-0-147-82.us-west-1.compute.internal container=packageserver container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:33:20.311 E ns/openshift-marketplace pod/redhat-marketplace-56c7f76897-rszqx node/ip-10-0-149-63.us-west-1.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Mar 05 01:33:54.420 E ns/openshift-marketplace pod/community-operators-686f44876d-9qz6k node/ip-10-0-149-63.us-west-1.compute.internal container=community-operators container exited with code 2 (Error): 
Mar 05 01:35:05.320 E ns/openshift-etcd pod/etcd-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-03-05 01:14:20.205919 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-137-81.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-137-81.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:14:20.207781 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-03-05 01:14:20.208255 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-137-81.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-137-81.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:14:20.210811 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/03/05 01:14:20 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.137.81:9978: connect: connection refused". Reconnecting...\nWARNING: 2020/03/05 01:14:21 grpc: addrConn.createTransport failed to connect to {https://etcd-2.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.137.81:9978: connect: connection refused". Reconnecting...\n
Mar 05 01:35:05.376 E ns/openshift-controller-manager pod/controller-manager-fsw7z node/ip-10-0-137-81.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error):  image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0305 01:18:14.883574       1 build_controller.go:474] Starting build controller\nI0305 01:18:14.883641       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nI0305 01:18:14.865648       1 deleted_token_secrets.go:69] caches synced\nW0305 01:28:54.033506       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 569; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:32:17.006862       1 reflector.go:340] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: watch of *v1.TemplateInstance ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 701; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:32:17.048377       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 585; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:32:17.048561       1 reflector.go:340] github.com/openshift/client-go/apps/informers/externalversions/factory.go:101: watch of *v1.DeploymentConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 617; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:32:17.048705       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 529; INTERNAL_ERROR") has prevented the request from succeeding\n
Mar 05 01:35:05.409 E ns/openshift-cluster-node-tuning-operator pod/tuned-vbgxg node/ip-10-0-137-81.us-west-1.compute.internal container=tuned container exited with code 143 (Error): ested: openshift-control-plane\nI0305 01:18:25.533594     920 tuned.go:170] disabling system tuned...\nI0305 01:18:25.532334     920 tuned.go:521] tuned "rendered" added\nI0305 01:18:25.534039     920 tuned.go:219] extracting tuned profiles\nI0305 01:18:25.540523     920 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0305 01:18:26.521081     920 tuned.go:393] getting recommended profile...\nI0305 01:18:26.662204     920 tuned.go:421] active profile () != recommended profile (openshift-control-plane)\nI0305 01:18:26.662272     920 tuned.go:286] starting tuned...\n2020-03-05 01:18:26,793 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-03-05 01:18:26,801 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-03-05 01:18:26,802 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-03-05 01:18:26,802 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-03-05 01:18:26,803 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-03-05 01:18:26,843 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:18:26,844 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:18:26,855 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:18:26,856 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:18:26,859 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:18:26,860 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:18:26,862 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:18:26,990 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:18:27,002 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\n
Mar 05 01:35:05.424 E ns/openshift-sdn pod/sdn-controller-mh6wn node/ip-10-0-137-81.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error): I0305 01:19:45.207532       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\n
Mar 05 01:35:05.451 E ns/openshift-sdn pod/ovs-5nwqk node/ip-10-0-137-81.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): 2020-03-05T01:32:36.322Z|00030|jsonrpc|WARN|Dropped 5 log messages in last 17 seconds (most recently, 12 seconds ago) due to excessive rate\n2020-03-05T01:32:36.322Z|00031|jsonrpc|WARN|unix#646: receive error: Connection reset by peer\n2020-03-05T01:32:36.322Z|00032|reconnect|WARN|unix#646: connection dropped (Connection reset by peer)\n2020-03-05T01:32:39.437Z|00166|connmgr|INFO|br0<->unix#783: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:39.497Z|00167|connmgr|INFO|br0<->unix#786: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:39.532Z|00168|bridge|INFO|bridge br0: deleted interface veth1818c267 on port 75\n2020-03-05T01:32:41.770Z|00169|bridge|INFO|bridge br0: added interface vethf955c050 on port 76\n2020-03-05T01:32:41.808Z|00170|connmgr|INFO|br0<->unix#792: 5 flow_mods in the last 0 s (5 adds)\n2020-03-05T01:32:41.857Z|00171|connmgr|INFO|br0<->unix#796: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:41.861Z|00172|connmgr|INFO|br0<->unix#798: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-03-05T01:32:43.323Z|00173|connmgr|INFO|br0<->unix#801: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:43.373Z|00174|connmgr|INFO|br0<->unix#804: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:43.412Z|00175|bridge|INFO|bridge br0: deleted interface veth93ab142f on port 71\n2020-03-05T01:32:43.568Z|00176|connmgr|INFO|br0<->unix#807: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:43.610Z|00177|connmgr|INFO|br0<->unix#810: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:43.645Z|00178|bridge|INFO|bridge br0: deleted interface vethb480f464 on port 65\n2020-03-05T01:32:45.412Z|00179|connmgr|INFO|br0<->unix#815: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:45.445Z|00180|connmgr|INFO|br0<->unix#818: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:45.467Z|00181|bridge|INFO|bridge br0: deleted interface vethf955c050 on port 76\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Mar 05 01:35:05.487 E ns/openshift-multus pod/multus-t6zxl node/ip-10-0-137-81.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:35:05.534 E ns/openshift-machine-config-operator pod/machine-config-daemon-dblk6 node/ip-10-0-137-81.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:35:05.550 E ns/openshift-machine-config-operator pod/machine-config-server-dvc8d node/ip-10-0-137-81.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0305 01:28:46.779292       1 start.go:38] Version: machine-config-daemon-4.5.0-202003042001-4-g09266642-dirty (092666426506d8d2b71ef0b17a7af0e955398d8f)\nI0305 01:28:46.780581       1 api.go:51] Launching server on :22624\nI0305 01:28:46.780651       1 api.go:51] Launching server on :22623\n
Mar 05 01:35:05.576 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=cluster-policy-controller container exited with code 1 (Error): chine.openshift.io/v1beta1, Resource=machinehealthchecks", couldn't start monitor for resource "template.openshift.io/v1, Resource=templates": unable to monitor quota for resource "template.openshift.io/v1, Resource=templates", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs": unable to monitor quota for resource "operators.coreos.com/v2, Resource=catalogsourceconfigs", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords": unable to monitor quota for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=thanosrulers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=thanosrulers"]\nI0305 01:30:30.277207       1 policy_controller.go:144] Started "openshift.io/cluster-quota-reconciliation"\nI0305 01:30:30.277220       1 policy_controller.go:147] Started Origin Controllers\nI0305 01:30:30.277237       1 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller\nI0305 01:30:30.277269       1 reconciliation_controller.go:134] Starting the cluster quota reconciliation controller\nI0305 01:30:30.277315       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0305 01:30:30.369323       1 shared_informer.go:204] Caches are synced for resource quota \nW0305 01:32:17.006888       1 reflector.go:326] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 459; INTERNAL_ERROR") has prevented the request from succeeding\n
Mar 05 01:35:05.576 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:13.958831       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:13.959223       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:17.835522       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:17.835852       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:23.979492       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:23.979821       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:27.847665       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:27.848042       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:33.990970       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:33.991599       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:37.855681       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:37.855982       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:44.007641       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:44.008315       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:32:47.866117       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:47.866460       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Mar 05 01:35:05.576 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): ent(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"community-operators", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service openshift-marketplace/community-operators: endpoints "community-operators" already exists\nI0305 01:32:51.294330       1 replica_set.go:561] Too few replicas for ReplicaSet openshift-marketplace/redhat-marketplace-7f5f54b785, need 1, creating 1\nI0305 01:32:51.295083       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-marketplace", Name:"redhat-marketplace", UID:"1626fd78-bd96-4601-bc98-4c6245e2bf26", APIVersion:"apps/v1", ResourceVersion:"40933", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redhat-marketplace-7f5f54b785 to 1\nI0305 01:32:51.309509       1 deployment_controller.go:484] Error syncing deployment openshift-marketplace/redhat-marketplace: Operation cannot be fulfilled on deployments.apps "redhat-marketplace": the object has been modified; please apply your changes to the latest version and try again\nI0305 01:32:51.336331       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"openshift-marketplace", Name:"redhat-marketplace-7f5f54b785", UID:"28c1aadc-2a14-4b4c-b26a-a23227adccd3", APIVersion:"apps/v1", ResourceVersion:"40935", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redhat-marketplace-7f5f54b785-8wqfn\nI0305 01:32:51.368076       1 endpoints_controller.go:340] Error syncing endpoints for service "openshift-marketplace/redhat-marketplace", retrying. Error: endpoints "redhat-marketplace" already exists\nI0305 01:32:51.370201       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"", Name:"redhat-marketplace", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FailedToCreateEndpoint' Failed to create endpoint for service openshift-marketplace/redhat-marketplace: endpoints "redhat-marketplace" already exists\n
Mar 05 01:35:05.576 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): 7] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-03-05 00:37:36 +0000 UTC to 2030-03-03 00:37:36 +0000 UTC (now=2020-03-05 01:15:37.411941855 +0000 UTC))\nI0305 01:15:37.412009       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1583369606" [] issuer="kubelet-signer" (2020-03-05 00:53:25 +0000 UTC to 2020-03-06 00:37:41 +0000 UTC (now=2020-03-05 01:15:37.411995163 +0000 UTC))\nI0305 01:15:37.412062       1 tlsconfig.go:157] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "aggregator-signer" [] issuer="<self>" (2020-03-05 00:37:39 +0000 UTC to 2020-03-06 00:37:39 +0000 UTC (now=2020-03-05 01:15:37.412048536 +0000 UTC))\nI0305 01:15:37.412455       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-593971195/tls.crt::/tmp/serving-cert-593971195/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1583370934" (2020-03-05 01:15:35 +0000 UTC to 2020-04-04 01:15:36 +0000 UTC (now=2020-03-05 01:15:37.41243495 +0000 UTC))\nI0305 01:15:37.412834       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583370937" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583370936" (2020-03-05 00:15:36 +0000 UTC to 2021-03-05 00:15:36 +0000 UTC (now=2020-03-05 01:15:37.412815623 +0000 UTC))\nI0305 01:32:52.394399       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:32:52.394494       1 leaderelection.go:67] leaderelection lost\n
Mar 05 01:35:05.602 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): s been compacted\nE0305 01:32:52.571821       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.598639       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.598869       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.598895       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nW0305 01:32:52.599506       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [10.0.130.16 10.0.147.82]\nE0305 01:32:52.599579       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nI0305 01:32:52.619514       1 genericapiserver.go:643] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-ip-10-0-137-81.us-west-1.compute.internal", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished\nE0305 01:32:52.646302       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.646467       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.646497       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.646834       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.647011       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.705707       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.745689       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\nE0305 01:32:52.746550       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted\n
Mar 05 01:35:05.602 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0305 01:15:32.764348       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 05 01:35:05.602 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:32:34.472825       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:34.473197       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:32:44.483102       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:32:44.490858       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Mar 05 01:35:05.602 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error): W0305 01:15:32.228519       1 cmd.go:200] Using insecure, self-signed certificates\nI0305 01:15:32.229023       1 crypto.go:580] Generating new CA for cert-regeneration-controller-signer@1583370932 cert, and key in /tmp/serving-cert-400773889/serving-signer.crt, /tmp/serving-cert-400773889/serving-signer.key\nI0305 01:15:33.379628       1 observer_polling.go:155] Starting file observer\nI0305 01:15:36.992708       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-apiserver/cert-regeneration-controller-lock...\nI0305 01:32:52.519259       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:32:52.519367       1 leaderelection.go:67] leaderelection lost\n
Mar 05 01:35:05.627 E ns/openshift-multus pod/multus-admission-controller-ml9mg node/ip-10-0-137-81.us-west-1.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Mar 05 01:35:05.657 E ns/openshift-cluster-version pod/cluster-version-operator-6f6f7897cc-k7s7w node/ip-10-0-137-81.us-west-1.compute.internal container=cluster-version-operator container exited with code 255 (Error): :52.329835       1 task_graph.go:568] Canceled worker 15\nI0305 01:32:52.329844       1 task_graph.go:568] Canceled worker 12\nI0305 01:32:52.329850       1 task_graph.go:568] Canceled worker 9\nI0305 01:32:52.329867       1 task_graph.go:568] Canceled worker 5\nI0305 01:32:52.329877       1 task_graph.go:568] Canceled worker 13\nI0305 01:32:52.329877       1 cvo.go:439] Started syncing cluster version "openshift-cluster-version/version" (2020-03-05 01:32:52.329868068 +0000 UTC m=+29.850489465)\nI0305 01:32:52.330005       1 cvo.go:468] Desired version from spec is v1.Update{Version:"", Image:"registry.svc.ci.openshift.org/ci-op-p6dgihk0/release@sha256:a98e0a96523f3ccca7ed1d0883988188769d941e7e020cf7ed21cc2fbf061517", Force:true}\nI0305 01:32:52.329776       1 task_graph.go:568] Canceled worker 6\nI0305 01:32:52.330117       1 task_graph.go:588] Workers finished\nI0305 01:32:52.330189       1 task_graph.go:596] Result of work: [Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable]\nI0305 01:32:52.330244       1 sync_worker.go:783] Summarizing 1 errors\nI0305 01:32:52.330127       1 cvo.go:441] Finished syncing cluster version "openshift-cluster-version/version" (252.77µs)\nI0305 01:32:52.330285       1 sync_worker.go:787] Update error 146 of 580: ClusterOperatorDegraded Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable (*errors.errorString: cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable)\nI0305 01:32:52.330326       1 cvo.go:366] Shutting down ClusterVersionOperator\nE0305 01:32:52.330366       1 sync_worker.go:329] unable to synchronize image (waiting 21.565712806s): Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable\nF0305 01:32:52.372733       1 start.go:148] Received shutdown signal twice, exiting\n
Mar 05 01:35:05.679 E ns/openshift-monitoring pod/node-exporter-czwlj node/ip-10-0-137-81.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:31:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:31:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:32:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:32:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:32:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:32:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:32:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:35:10.904 E ns/openshift-multus pod/multus-t6zxl node/ip-10-0-137-81.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:35:12.334 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-137-81.us-west-1.compute.internal node/ip-10-0-137-81.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): e found feasible.\nI0305 01:32:41.797937       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6b66c54b4c-rpbrp: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0305 01:32:43.331465       1 scheduler.go:751] pod openshift-authentication/oauth-openshift-845cf56b57-224mz is bound successfully on node "ip-10-0-130-16.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0305 01:32:46.798870       1 factory.go:453] Unable to schedule openshift-machine-config-operator/etcd-quorum-guard-6b66c54b4c-rpbrp: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0305 01:32:47.802057       1 factory.go:453] Unable to schedule openshift-apiserver/apiserver-6646bf8b4d-p5sjt: no fit: 0/6 nodes are available: 2 node(s) didn't match node selector, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) were unschedulable.; waiting\nI0305 01:32:49.122958       1 scheduler.go:751] pod openshift-marketplace/redhat-operators-67496cbf75-ddjv5 is bound successfully on node "ip-10-0-149-63.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0305 01:32:49.998009       1 scheduler.go:751] pod openshift-marketplace/certified-operators-59d456cdcd-7kd7g is bound successfully on node "ip-10-0-149-63.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0305 01:32:51.075890       1 scheduler.go:751] pod openshift-marketplace/community-operators-559ccfdb9-4gswn is bound successfully on node "ip-10-0-149-63.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\nI0305 01:32:51.353950       1 scheduler.go:751] pod openshift-marketplace/redhat-marketplace-7f5f54b785-8wqfn is bound successfully on node "ip-10-0-149-63.us-west-1.compute.internal", 6 nodes evaluated, 2 nodes were found feasible.\n
Mar 05 01:35:14.454 E ns/openshift-multus pod/multus-t6zxl node/ip-10-0-137-81.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:35:23.536 E ns/openshift-machine-config-operator pod/machine-config-daemon-dblk6 node/ip-10-0-137-81.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 05 01:35:36.355 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6cc84d4b7b-sxn69 node/ip-10-0-147-82.us-west-1.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): -version-migrator-operator", UID:"f917a14a-f5cc-4e57-9a1d-ef8990dacef8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from False to True ("Progressing: deployment/migrator.openshift-kube-storage-version-migrator:: observed generation is 1, desired generation is 2.")\nI0305 01:15:51.856747       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"f917a14a-f5cc-4e57-9a1d-ef8990dacef8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("")\nI0305 01:31:48.775220       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"f917a14a-f5cc-4e57-9a1d-ef8990dacef8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from True to False ("Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available")\nI0305 01:31:57.015191       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"f917a14a-f5cc-4e57-9a1d-ef8990dacef8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0305 01:35:32.285470       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:35:32.285602       1 leaderelection.go:66] leaderelection lost\n
Mar 05 01:35:36.916 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-557b9f6d69-7pljm node/ip-10-0-147-82.us-west-1.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): e_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nI0305 01:35:32.760699       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0305 01:35:32.760739       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0305 01:35:32.760777       1 base_controller.go:73] Shutting down  ...\nI0305 01:35:32.760810       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:35:32.760827       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0305 01:35:32.760841       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0305 01:35:32.760856       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0305 01:35:32.760894       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0305 01:35:32.760910       1 prune_controller.go:232] Shutting down PruneController\nI0305 01:35:32.760925       1 base_controller.go:73] Shutting down RevisionController ...\nI0305 01:35:32.761226       1 base_controller.go:48] Shutting down worker of  controller ...\nI0305 01:35:32.761237       1 base_controller.go:38] All  workers have been terminated\nI0305 01:35:32.761255       1 base_controller.go:48] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0305 01:35:32.761263       1 base_controller.go:38] All UnsupportedConfigOverridesController workers have been terminated\nI0305 01:35:32.761279       1 base_controller.go:48] Shutting down worker of LoggingSyncer controller ...\nI0305 01:35:32.761287       1 base_controller.go:38] All LoggingSyncer workers have been terminated\nI0305 01:35:32.761316       1 base_controller.go:48] Shutting down worker of RevisionController controller ...\nI0305 01:35:32.761323       1 base_controller.go:38] All RevisionController workers have been terminated\nI0305 01:35:32.761464       1 workload_controller.go:204] Shutting down OpenShiftAPIServerOperator\nF0305 01:35:32.761877       1 builder.go:210] server exited\n
Mar 05 01:35:38.808 E ns/openshift-insights pod/insights-operator-64cd9bcd65-pqcdt node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 2 (Error): metheus/2.15.2 10.128.2.34:58666]\nI0305 01:32:54.705772       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0305 01:32:54.722681       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209\nI0305 01:32:58.227966       1 httplog.go:90] GET /metrics: (24.384439ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\nI0305 01:33:18.226007       1 status.go:298] The operator is healthy\nI0305 01:33:24.624765       1 httplog.go:90] GET /metrics: (9.913326ms) 200 [Prometheus/2.15.2 10.128.2.34:58666]\nI0305 01:33:28.216791       1 httplog.go:90] GET /metrics: (14.480202ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\nI0305 01:33:54.622571       1 httplog.go:90] GET /metrics: (8.315609ms) 200 [Prometheus/2.15.2 10.128.2.34:58666]\nI0305 01:33:58.213467       1 httplog.go:90] GET /metrics: (11.460963ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\nI0305 01:34:18.210113       1 configobserver.go:65] Refreshing configuration from cluster pull secret\nI0305 01:34:18.216265       1 configobserver.go:90] Found cloud.openshift.com token\nI0305 01:34:18.216290       1 configobserver.go:107] Refreshing configuration from cluster secret\nI0305 01:34:24.622256       1 httplog.go:90] GET /metrics: (7.970424ms) 200 [Prometheus/2.15.2 10.128.2.34:58666]\nI0305 01:34:28.204186       1 httplog.go:90] GET /metrics: (2.176895ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\nI0305 01:34:54.623548       1 httplog.go:90] GET /metrics: (9.134399ms) 200 [Prometheus/2.15.2 10.128.2.34:58666]\nI0305 01:34:58.205796       1 httplog.go:90] GET /metrics: (3.578828ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\nI0305 01:35:18.227225       1 status.go:298] The operator is healthy\nI0305 01:35:24.622305       1 httplog.go:90] GET /metrics: (7.864029ms) 200 [Prometheus/2.15.2 10.128.2.34:58666]\nI0305 01:35:28.208565       1 httplog.go:90] GET /metrics: (6.47487ms) 200 [Prometheus/2.15.2 10.131.0.23:43806]\n
Mar 05 01:35:39.764 E ns/openshift-machine-config-operator pod/machine-config-operator-7b59b9bf79-2kkl2 node/ip-10-0-147-82.us-west-1.compute.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0305 01:26:11.540473       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"2e970c0b-2747-4486-9b59-0858664d0112", ResourceVersion:"35198", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718966397, loc:(*time.Location)(0x27fa020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-7b59b9bf79-2kkl2_937bff27-234d-4b21-ad7e-e69618d18108\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-03-05T01:26:11Z\",\"renewTime\":\"2020-03-05T01:26:11Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-7b59b9bf79-2kkl2_937bff27-234d-4b21-ad7e-e69618d18108 became leader'\nI0305 01:26:11.540586       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0305 01:26:12.194972       1 operator.go:264] Starting MachineConfigOperator\nI0305 01:26:12.201504       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"10a470a3-944e-4ef7-87db-7f63a6d53680", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-03-05-003028}] to [{operator 0.0.1-2020-03-05-003700}]\n
Mar 05 01:35:40.376 E ns/openshift-service-ca pod/service-ca-d6fb8cc66-tns8l node/ip-10-0-147-82.us-west-1.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Mar 05 01:35:40.668 E ns/openshift-monitoring pod/thanos-querier-7d6fbb7bdc-zm2z4 node/ip-10-0-147-82.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/03/05 01:17:03 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:17:03 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:17:03 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:17:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/05 01:17:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:17:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:17:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:17:03 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/03/05 01:17:03 http.go:107: HTTPS: listening on [::]:9091\nI0305 01:17:03.230972       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Mar 05 01:35:40.728 E ns/openshift-machine-api pod/machine-api-controllers-f7bd84946-8rbkp node/ip-10-0-147-82.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 
Mar 05 01:35:40.789 E ns/openshift-machine-api pod/machine-api-controllers-f7bd84946-8rbkp node/ip-10-0-147-82.us-west-1.compute.internal container=machine-healthcheck-controller container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:35:41.588 E ns/openshift-cluster-machine-approver pod/machine-approver-995f688f9-d4vgw node/ip-10-0-147-82.us-west-1.compute.internal container=machine-approver-controller container exited with code 2 (Error): .\nI0305 01:29:16.262001       1 config.go:33] using default as failed to load config /var/run/configmaps/config/config.yaml: open /var/run/configmaps/config/config.yaml: no such file or directory\nI0305 01:29:16.262032       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0305 01:29:16.262079       1 main.go:236] Starting Machine Approver\nI0305 01:29:16.362407       1 main.go:146] CSR csr-pxzm2 added\nI0305 01:29:16.362554       1 main.go:149] CSR csr-pxzm2 is already approved\nI0305 01:29:16.362691       1 main.go:146] CSR csr-2rwmc added\nI0305 01:29:16.362745       1 main.go:149] CSR csr-2rwmc is already approved\nI0305 01:29:16.362786       1 main.go:146] CSR csr-66dpg added\nI0305 01:29:16.362852       1 main.go:149] CSR csr-66dpg is already approved\nI0305 01:29:16.362897       1 main.go:146] CSR csr-9fxqn added\nI0305 01:29:16.362935       1 main.go:149] CSR csr-9fxqn is already approved\nI0305 01:29:16.362972       1 main.go:146] CSR csr-d98sm added\nI0305 01:29:16.363038       1 main.go:149] CSR csr-d98sm is already approved\nI0305 01:29:16.363083       1 main.go:146] CSR csr-fx96p added\nI0305 01:29:16.363122       1 main.go:149] CSR csr-fx96p is already approved\nI0305 01:29:16.363160       1 main.go:146] CSR csr-hqjxs added\nI0305 01:29:16.363228       1 main.go:149] CSR csr-hqjxs is already approved\nI0305 01:29:16.363274       1 main.go:146] CSR csr-4rr7h added\nI0305 01:29:16.363314       1 main.go:149] CSR csr-4rr7h is already approved\nI0305 01:29:16.363359       1 main.go:146] CSR csr-hjnzb added\nI0305 01:29:16.363433       1 main.go:149] CSR csr-hjnzb is already approved\nI0305 01:29:16.363477       1 main.go:146] CSR csr-rbg2m added\nI0305 01:29:16.363517       1 main.go:149] CSR csr-rbg2m is already approved\nI0305 01:29:16.363586       1 main.go:146] CSR csr-trcm7 added\nI0305 01:29:16.363628       1 main.go:149] CSR csr-trcm7 is already approved\nI0305 01:29:16.363672       1 main.go:146] CSR csr-wrf47 added\nI0305 01:29:16.363753       1 main.go:149] CSR csr-wrf47 is already approved\n
Mar 05 01:35:41.697 E ns/openshift-service-ca-operator pod/service-ca-operator-85fdbb449b-jjkc4 node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 255 (Error): 
Mar 05 01:35:42.249 E ns/openshift-machine-api pod/machine-api-operator-57f9cb8f78-c8lzs node/ip-10-0-147-82.us-west-1.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Mar 05 01:35:42.401 E ns/openshift-service-catalog-apiserver-operator pod/openshift-service-catalog-apiserver-operator-69c465b87f-8pvw6 node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 255 (Error): 4:33414]\nI0305 01:34:56.062006       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0305 01:34:56.076417       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0305 01:34:56.493868       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:35:06.506209       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:35:07.093432       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0305 01:35:07.093459       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0305 01:35:07.094755       1 httplog.go:90] GET /metrics: (7.08731ms) 200 [Prometheus/2.15.2 10.131.0.23:33976]\nI0305 01:35:16.058532       1 workload_controller.go:329] No service bindings found, nothing to delete.\nI0305 01:35:16.073977       1 workload_controller.go:181] apiservice v1beta1.servicecatalog.k8s.io deleted\nI0305 01:35:16.520856       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:35:18.566000       1 handler.go:153] openshift-cluster-svcat-apiserver-operator: GET "/metrics" satisfied by nonGoRestful\nI0305 01:35:18.566254       1 pathrecorder.go:240] openshift-cluster-svcat-apiserver-operator: "/metrics" satisfied by exact match\nI0305 01:35:18.567991       1 httplog.go:90] GET /metrics: (9.448425ms) 200 [Prometheus/2.15.2 10.128.2.34:33414]\nI0305 01:35:26.537886       1 leaderelection.go:282] successfully renewed lease openshift-service-catalog-apiserver-operator/openshift-cluster-svcat-apiserver-operator-lock\nI0305 01:35:35.932699       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:35:35.932858       1 leaderelection.go:66] leaderelection lost\n
Mar 05 01:35:42.714 E ns/openshift-authentication-operator pod/authentication-operator-56bc546ccf-9m7tz node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 255 (Error): nt replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-03-05T01:06:33Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-03-05T00:53:24Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0305 01:35:36.080590       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"32e64b6d-577d-4197-917e-5f1ede191d8c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: not all deployment replicas are ready")\nI0305 01:35:36.832524       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:35:36.833728       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0305 01:35:36.833827       1 logging_controller.go:93] Shutting down LogLevelController\nI0305 01:35:36.833896       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nI0305 01:35:36.833937       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0305 01:35:36.833990       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0305 01:35:36.834032       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0305 01:35:36.834063       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0305 01:35:36.834083       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0305 01:35:36.834097       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0305 01:35:36.834237       1 builder.go:243] stopped\n
Mar 05 01:35:42.755 E ns/openshift-etcd-operator pod/etcd-operator-f95c5854f-lqfzh node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 255 (Error): Controller\nI0305 01:35:39.938061       1 base_controller.go:74] Shutting down NodeController ...\nI0305 01:35:39.938081       1 base_controller.go:74] Shutting down RevisionController ...\nI0305 01:35:39.938113       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0305 01:35:39.938132       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0305 01:35:39.938150       1 base_controller.go:74] Shutting down InstallerController ...\nI0305 01:35:39.938175       1 base_controller.go:74] Shutting down  ...\nI0305 01:35:39.938284       1 base_controller.go:49] Shutting down worker of  controller ...\nI0305 01:35:39.938296       1 base_controller.go:39] All  workers have been terminated\nI0305 01:35:39.938332       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0305 01:35:39.938342       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nI0305 01:35:39.938366       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0305 01:35:39.938376       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0305 01:35:39.938398       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0305 01:35:39.938407       1 base_controller.go:39] All PruneController workers have been terminated\nI0305 01:35:39.938440       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0305 01:35:39.938450       1 base_controller.go:39] All NodeController workers have been terminated\nI0305 01:35:39.938473       1 base_controller.go:49] Shutting down worker of RevisionController controller ...\nI0305 01:35:39.938485       1 base_controller.go:39] All RevisionController workers have been terminated\nI0305 01:35:39.938507       1 base_controller.go:49] Shutting down worker of  controller ...\nI0305 01:35:39.938517       1 base_controller.go:39] All  workers have been terminated\nF0305 01:35:39.939254       1 builder.go:243] stopped\n
Mar 05 01:35:44.741 E ns/openshift-console-operator pod/console-operator-7ff5d865d6-fcqrs node/ip-10-0-147-82.us-west-1.compute.internal container=console-operator container exited with code 255 (Error): ransitionTime":"2020-03-05T01:18:11Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-03-05T01:35:42Z","message":"DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-03-05-003700","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2020-03-05T00:57:42Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0305 01:35:42.571789       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"dd8877ca-227e-4f8a-a4ee-a8f860bd0975", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 0.0.1-2020-03-05-003700")\nI0305 01:35:42.620505       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:35:42.620703       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0305 01:35:42.621238       1 controller.go:70] Shutting down Console\nI0305 01:35:42.621243       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0305 01:35:42.621330       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0305 01:35:42.621348       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0305 01:35:42.621366       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0305 01:35:42.621379       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0305 01:35:42.621530       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0305 01:35:42.621569       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0305 01:35:42.621579       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nF0305 01:35:42.621581       1 builder.go:210] server exited\n
Mar 05 01:35:44.788 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-6654d8b29 node/ip-10-0-147-82.us-west-1.compute.internal container=operator container exited with code 255 (Error): 5.2 10.129.2.58:57638]\nI0305 01:31:33.862651       1 httplog.go:90] GET /metrics: (5.841676ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:32:03.863482       1 httplog.go:90] GET /metrics: (6.439068ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:32:33.869325       1 httplog.go:90] GET /metrics: (11.751963ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:32:47.099444       1 httplog.go:90] GET /metrics: (5.07199ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:33:03.862799       1 httplog.go:90] GET /metrics: (5.895862ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:33:17.093527       1 httplog.go:90] GET /metrics: (6.261657ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:33:33.869830       1 httplog.go:90] GET /metrics: (12.822916ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:33:47.092510       1 httplog.go:90] GET /metrics: (5.171364ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:34:03.862737       1 httplog.go:90] GET /metrics: (5.832911ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:34:17.092987       1 httplog.go:90] GET /metrics: (5.655413ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:34:33.862650       1 httplog.go:90] GET /metrics: (5.772703ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:34:47.092411       1 httplog.go:90] GET /metrics: (5.09971ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:35:03.863368       1 httplog.go:90] GET /metrics: (6.4057ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:35:17.093911       1 httplog.go:90] GET /metrics: (6.455448ms) 200 [Prometheus/2.15.2 10.131.0.23:50810]\nI0305 01:35:20.827338       1 reflector.go:383] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received\nI0305 01:35:33.907882       1 httplog.go:90] GET /metrics: (49.448546ms) 200 [Prometheus/2.15.2 10.128.2.34:34534]\nI0305 01:35:42.710743       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0305 01:35:42.710870       1 leaderelection.go:66] leaderelection lost\n
Mar 05 01:36:09.045 E ns/openshift-console pod/console-844786c8db-gcxpk node/ip-10-0-147-82.us-west-1.compute.internal container=console container exited with code 2 (Error): //oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-03-05T01:19:38Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-03-05T01:19:40Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.128.0.64:59486->172.30.0.10:53: i/o timeout\n2020-03-05T01:19:40Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com: dial tcp: lookup oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.128.0.64:59486->172.30.0.10:53: i/o timeout\n2020-03-05T01:32:35Z http: TLS handshake error from 10.131.0.7:40064: read tcp 10.128.0.64:8443->10.131.0.7:40064: read: connection reset by peer\n2020-03-05T01:33:14Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Mar 05 01:36:24.734 E kube-apiserver failed contacting the API: Get https://api.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=40998&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp 52.52.156.19:6443: connect: connection refused
Mar 05 01:36:32.106 E ns/openshift-network-operator pod/network-operator-7c44c66579-nqskb node/ip-10-0-137-81.us-west-1.compute.internal container=network-operator container exited with code 1 (Error): 2020/03/05 01:36:31 Go Version: go1.12.16\n2020/03/05 01:36:31 Go OS/Arch: linux/amd64\n2020/03/05 01:36:31 operator-sdk Version: v0.12.0\n2020/03/05 01:36:31 overriding kubernetes api to https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443\n2020/03/05 01:36:31 Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api?timeout=32s: dial tcp 10.0.137.104:6443: connect: connection refused\n
Mar 05 01:36:34.358 E ns/openshift-operator-lifecycle-manager pod/packageserver-5d6fbd78c6-gmv4c node/ip-10-0-130-16.us-west-1.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-03-05T01:36:33Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Mar 05 01:36:36.361 E ns/openshift-operator-lifecycle-manager pod/packageserver-5d6fbd78c6-gmv4c node/ip-10-0-130-16.us-west-1.compute.internal container=packageserver container exited with code 1 (Error): C_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA\n      --tls-min-version string                                  Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13\n      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])\n  -v, --v Level                                                 number for the log level verbosity (default 0)\n      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n\ntime="2020-03-05T01:36:35Z" level=fatal msg="Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 172.30.0.1:443: connect: connection refused"\n
Mar 05 01:37:47.022 E ns/openshift-cluster-node-tuning-operator pod/tuned-vspg2 node/ip-10-0-141-142.us-west-1.compute.internal container=tuned container exited with code 143 (Error): 8:06.110121    1016 tuned.go:219] extracting tuned profiles\nI0305 01:18:06.112128    1016 tuned.go:469] profile "ip-10-0-141-142.us-west-1.compute.internal" added, tuned profile requested: openshift-node\nI0305 01:18:06.112156    1016 tuned.go:170] disabling system tuned...\nI0305 01:18:06.118364    1016 tuned.go:176] failed to disable system tuned: Failed to execute operation: Unit file tuned.service does not exist.\nI0305 01:18:07.018186    1016 tuned.go:393] getting recommended profile...\nI0305 01:18:07.139140    1016 tuned.go:421] active profile () != recommended profile (openshift-node)\nI0305 01:18:07.139230    1016 tuned.go:286] starting tuned...\n2020-03-05 01:18:07,249 INFO     tuned.daemon.application: dynamic tuning is globally disabled\n2020-03-05 01:18:07,255 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)\n2020-03-05 01:18:07,255 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-03-05 01:18:07,256 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-03-05 01:18:07,257 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-03-05 01:18:07,292 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:18:07,292 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:18:07,304 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:18:07,305 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:18:07,308 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:18:07,311 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:18:07,312 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:18:07,422 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:18:07,432 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\n
Mar 05 01:37:47.053 E ns/openshift-monitoring pod/node-exporter-rzmd6 node/ip-10-0-141-142.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:09Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:21Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:24Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:36Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:39Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:51Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:54Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:37:47.076 E ns/openshift-sdn pod/ovs-hc7lt node/ip-10-0-141-142.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): nix#679: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:31:57.728Z|00209|connmgr|INFO|br0<->unix#682: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:31:57.757Z|00210|bridge|INFO|bridge br0: deleted interface veth578d4ea4 on port 4\n2020-03-05T01:31:57.804Z|00211|connmgr|INFO|br0<->unix#686: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:31:57.845Z|00212|connmgr|INFO|br0<->unix#689: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:31:57.876Z|00213|bridge|INFO|bridge br0: deleted interface vethd66e6118 on port 12\n2020-03-05T01:31:57.973Z|00214|connmgr|INFO|br0<->unix#692: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:31:58.017Z|00215|connmgr|INFO|br0<->unix#695: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:31:58.051Z|00216|bridge|INFO|bridge br0: deleted interface veth52f99d2a on port 10\n2020-03-05T01:32:17.606Z|00217|connmgr|INFO|br0<->unix#712: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:17.642Z|00218|connmgr|INFO|br0<->unix#715: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:17.670Z|00219|bridge|INFO|bridge br0: deleted interface vethd40221d8 on port 6\n2020-03-05T01:32:33.916Z|00220|connmgr|INFO|br0<->unix#730: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:33.944Z|00221|connmgr|INFO|br0<->unix#733: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:33.966Z|00222|bridge|INFO|bridge br0: deleted interface veth0a2dcb4c on port 19\n2020-03-05T01:32:41.069Z|00223|connmgr|INFO|br0<->unix#739: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:32:41.096Z|00224|connmgr|INFO|br0<->unix#742: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:32:41.118Z|00225|bridge|INFO|bridge br0: deleted interface vethb44c86b3 on port 7\n2020-03-05T01:33:45.304Z|00021|jsonrpc|WARN|unix#719: receive error: Connection reset by peer\n2020-03-05T01:33:45.304Z|00022|reconnect|WARN|unix#719: connection dropped (Connection reset by peer)\ninfo: Saving flows ...\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\n
Mar 05 01:37:47.092 E ns/openshift-multus pod/multus-lkv7l node/ip-10-0-141-142.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:37:47.108 E ns/openshift-machine-config-operator pod/machine-config-daemon-vr427 node/ip-10-0-141-142.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:37:51.020 E ns/openshift-multus pod/multus-lkv7l node/ip-10-0-141-142.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:37:56.078 E ns/openshift-machine-config-operator pod/machine-config-daemon-vr427 node/ip-10-0-141-142.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 05 01:38:04.535 E ns/openshift-monitoring pod/openshift-state-metrics-54ddd7687b-fbxfs node/ip-10-0-142-46.us-west-1.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Mar 05 01:38:04.685 E ns/openshift-monitoring pod/thanos-querier-7d6fbb7bdc-746gr node/ip-10-0-142-46.us-west-1.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/03/05 01:16:39 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:16:39 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/03/05 01:16:39 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/03/05 01:16:39 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/03/05 01:16:39 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/03/05 01:16:39 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/03/05 01:16:39 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/03/05 01:16:39 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0305 01:16:39.556789       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/03/05 01:16:39 http.go:107: HTTPS: listening on [::]:9091\n
Mar 05 01:38:05.846 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-46.us-west-1.compute.internal container=alertmanager container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:38:05.846 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-46.us-west-1.compute.internal container=alertmanager-proxy container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:38:05.846 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-142-46.us-west-1.compute.internal container=config-reloader container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated
Mar 05 01:38:05.867 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:17:42.593Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:17:42.601Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:17:42.602Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:17:42.603Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:17:42.604Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:17:42.604Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:38:05.867 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/03/05 01:17:46 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Mar 05 01:38:05.867 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-142-46.us-west-1.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-03-05T01:17:45.701414781Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-03-05T01:17:45.703997017Z caller=runutil.go:95 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-03-05T01:17:50.940336742Z caller=reloader.go:286 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-03-05T01:17:50.94042403Z caller=reloader.go:154 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
Mar 05 01:38:16.247 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-141-142.us-west-1.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-03-05T01:38:13.789Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-03-05T01:38:13.790Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-03-05T01:38:13.791Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-03-05T01:38:13.792Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-03-05T01:38:13.792Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-03-05T01:38:13.792Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-03-05T01:38:13.796Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-03-05T01:38:13.796Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-03-05
Mar 05 01:38:39.916 E ns/openshift-controller-manager pod/controller-manager-c92bc node/ip-10-0-147-82.us-west-1.compute.internal container=controller-manager container exited with code 1 (Error): 05 01:33:49.474493       1 factory.go:80] Deployer controller caches are synced. Starting workers.\nI0305 01:33:49.659385       1 deleted_token_secrets.go:69] caches synced\nI0305 01:33:49.659587       1 docker_registry_service.go:154] caches synced\nI0305 01:33:49.659589       1 create_dockercfg_secrets.go:218] urls found\nI0305 01:33:49.659667       1 create_dockercfg_secrets.go:224] caches synced\nI0305 01:33:49.659774       1 docker_registry_service.go:296] Updating registry URLs from map[172.30.237.95:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}] to map[172.30.237.95:5000:{} image-registry.openshift-image-registry.svc.cluster.local:5000:{} image-registry.openshift-image-registry.svc:5000:{}]\nI0305 01:33:49.672495       1 deleted_dockercfg_secrets.go:74] caches synced\nI0305 01:33:49.693854       1 build_controller.go:474] Starting build controller\nI0305 01:33:49.693870       1 build_controller.go:476] OpenShift image registry hostname: image-registry.openshift-image-registry.svc:5000\nW0305 01:35:32.249435       1 reflector.go:340] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.Image ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 339; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:35:32.249575       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.BuildConfig ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 317; INTERNAL_ERROR") has prevented the request from succeeding\nW0305 01:35:32.249707       1 reflector.go:340] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: watch of *v1.Build ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 329; INTERNAL_ERROR") has prevented the request from succeeding\n
Mar 05 01:38:39.941 E ns/openshift-cluster-node-tuning-operator pod/tuned-46htq node/ip-10-0-147-82.us-west-1.compute.internal container=tuned container exited with code 143 (Error): what profile is recommended for your configuration.\n2020-03-05 01:17:42,045 INFO     tuned.daemon.daemon: Using 'openshift-control-plane' profile\n2020-03-05 01:17:42,069 INFO     tuned.profiles.loader: loading profile: openshift-control-plane\n2020-03-05 01:17:42,517 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:17:42,517 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:17:42,542 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:17:42,543 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:17:42,578 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:17:42,584 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:17:42,595 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:17:43,733 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:17:43,805 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-control-plane' applied\nI0305 01:32:52.851912    3356 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:32:52.888596    3356 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:35:58.020853    3356 tuned.go:494] profile "ip-10-0-147-82.us-west-1.compute.internal" changed, tuned profile requested: openshift-node\nI0305 01:35:58.067449    3356 tuned.go:494] profile "ip-10-0-147-82.us-west-1.compute.internal" changed, tuned profile requested: openshift-control-plane\nI0305 01:35:58.676946    3356 tuned.go:393] getting recommended profile...\nI0305 01:35:58.841354    3356 tuned.go:430] active and recommended profile (openshift-control-plane) match; profile change will not trigger profile reload\nI0305 01:36:24.449367    3356 tuned.go:115] received signal: terminated\nI0305 01:36:24.449402    3356 tuned.go:327] sending TERM to PID 3645\n
Mar 05 01:38:39.957 E ns/openshift-monitoring pod/node-exporter-h76kp node/ip-10-0-147-82.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:37Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:47Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:35:52Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:36:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:36:07Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:36:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:36:22Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:38:40.007 E ns/openshift-sdn pod/sdn-controller-8ntt6 node/ip-10-0-147-82.us-west-1.compute.internal container=sdn-controller container exited with code 2 (Error):  leader lease  openshift-sdn/openshift-network-controller...\nE0305 01:19:54.093345       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"openshift-network-controller", GenerateName:"", Namespace:"openshift-sdn", SelfLink:"/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller", UID:"f7437590-0c35-4214-bb34-45bc0d5d2cbf", ResourceVersion:"32537", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63718966179, loc:(*time.Location)(0x2b2b940)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"ip-10-0-147-82\",\"leaseDurationSeconds\":60,\"acquireTime\":\"2020-03-05T00:49:39Z\",\"renewTime\":\"2020-03-05T01:19:54Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'ip-10-0-147-82 became leader'\nI0305 01:19:54.093466       1 leaderelection.go:252] successfully acquired lease openshift-sdn/openshift-network-controller\nI0305 01:19:54.100172       1 master.go:51] Initializing SDN master\nI0305 01:19:54.173311       1 network_controller.go:61] Started OpenShift Network Controller\nE0305 01:30:42.023196       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/hostsubnets?allowWatchBookmarks=true&resourceVersion=30167&timeout=5m28s&timeoutSeconds=328&watch=true: dial tcp 10.0.137.104:6443: connect: connection refused\n
Mar 05 01:38:40.023 E ns/openshift-multus pod/multus-admission-controller-24mnt node/ip-10-0-147-82.us-west-1.compute.internal container=multus-admission-controller container exited with code 255 (Error): 
Mar 05 01:38:40.036 E ns/openshift-multus pod/multus-tg7nt node/ip-10-0-147-82.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:38:40.061 E ns/openshift-machine-config-operator pod/machine-config-daemon-qr5k7 node/ip-10-0-147-82.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:38:40.079 E ns/openshift-machine-config-operator pod/machine-config-server-mzxq8 node/ip-10-0-147-82.us-west-1.compute.internal container=machine-config-server container exited with code 2 (Error): I0305 01:29:16.485652       1 start.go:38] Version: machine-config-daemon-4.5.0-202003042001-4-g09266642-dirty (092666426506d8d2b71ef0b17a7af0e955398d8f)\nI0305 01:29:16.487199       1 api.go:51] Launching server on :22624\nI0305 01:29:16.487731       1 api.go:51] Launching server on :22623\n
Mar 05 01:38:40.099 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-apiserver container exited with code 1 (Error): nsport is closing"\nI0305 01:36:24.274412       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.274557       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.274782       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.275021       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.275982       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.276193       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.276346       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.278406       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.278721       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.283744       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.284135       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nI0305 01:36:24.284571       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\nW0305 01:36:24.284889       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.0.147.82:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.147.82:2379: connect: connection refused". Reconnecting...\nI0305 01:36:24.425591       1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing"\n
Mar 05 01:38:40.099 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-apiserver-insecure-readyz container exited with code 2 (Error): I0305 01:16:39.471483       1 readyz.go:103] Listening on 0.0.0.0:6080\n
Mar 05 01:38:40.099 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-apiserver-cert-syncer container exited with code 2 (Error): ce-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:36:15.016573       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:15.016952       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nI0305 01:36:24.162301       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:24.168858       1 certsync_controller.go:162] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\n
Mar 05 01:38:40.099 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-apiserver-cert-regeneration-controller container exited with code 255 (Error):  client_cert_rotation_controller.go:121] Waiting for CertRotationController - "LocalhostServing"\nI0305 01:30:34.386242       1 client_cert_rotation_controller.go:128] Finished waiting for CertRotationController - "LocalhostServing"\nI0305 01:32:55.572919       1 externalloadbalancer.go:26] syncing external loadbalancer hostnames: api.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com\nI0305 01:36:24.149138       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:36:24.149770       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostServing"\nI0305 01:36:24.150607       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeAPIServerToKubeletClientCert"\nI0305 01:36:24.150621       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "AggregatorProxyClientCert"\nI0305 01:36:24.150641       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ServiceNetworkServing"\nI0305 01:36:24.150650       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeControllerManagerClient"\nI0305 01:36:24.150663       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "LocalhostRecoveryServing"\nI0305 01:36:24.150680       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "InternalLoadBalancerServing"\nI0305 01:36:24.150692       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "ExternalLoadBalancerServing"\nI0305 01:36:24.150701       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "KubeSchedulerClient"\nI0305 01:36:24.150711       1 certrotationcontroller.go:560] Shutting down CertRotation\nI0305 01:36:24.150720       1 cabundlesyncer.go:84] Shutting down CA bundle controller\nI0305 01:36:24.151985       1 cabundlesyncer.go:86] CA bundle controller shut down\nF0305 01:36:24.160699       1 leaderelection.go:67] leaderelection lost\n
Mar 05 01:38:40.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-controller-manager-recovery-controller container exited with code 1 (Error):  [] issuer="<self>" (2020-03-05 00:37:39 +0000 UTC to 2020-03-06 00:37:39 +0000 UTC (now=2020-03-05 01:17:37.879924382 +0000 UTC))\nI0305 01:17:37.880326       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/tmp/serving-cert-791755570/tls.crt::/tmp/serving-cert-791755570/tls.key"]: "localhost" [serving] validServingFor=[localhost] issuer="cert-recovery-controller-signer@1583371053" (2020-03-05 01:17:33 +0000 UTC to 2020-04-04 01:17:34 +0000 UTC (now=2020-03-05 01:17:37.880307975 +0000 UTC))\nI0305 01:17:37.880706       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583371057" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583371057" (2020-03-05 00:17:37 +0000 UTC to 2021-03-05 00:17:37 +0000 UTC (now=2020-03-05 01:17:37.880688272 +0000 UTC))\nI0305 01:29:33.695775       1 leaderelection.go:252] successfully acquired lease openshift-kube-controller-manager/cert-recovery-controller-lock\nI0305 01:29:33.696237       1 event.go:281] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-kube-controller-manager", Name:"cert-recovery-controller-lock", UID:"29e7b6b7-7de8-4bfe-a7de-30172b4c811e", APIVersion:"v1", ResourceVersion:"37415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 7aad966a-4455-4d11-b466-1580fbec4dca became leader\nI0305 01:29:33.703457       1 csrcontroller.go:81] Starting CSR controller\nI0305 01:29:33.703612       1 shared_informer.go:197] Waiting for caches to sync for CSRController\nI0305 01:29:33.904056       1 shared_informer.go:204] Caches are synced for CSRController \nI0305 01:36:24.167316       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0305 01:36:24.167750       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0305 01:36:24.168012       1 csrcontroller.go:83] Shutting down CSR controller\nI0305 01:36:24.171739       1 csrcontroller.go:85] CSR controller shut down\nF0305 01:36:24.168205       1 builder.go:209] server exited\n
Mar 05 01:38:40.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-controller-manager-cert-syncer container exited with code 2 (Error):     1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:35:44.896578       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:35:44.897064       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:35:53.888389       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:35:53.888767       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:35:54.906634       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:35:54.907071       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:36:03.897652       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:03.898013       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:36:04.916090       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:04.916484       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:36:13.907570       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:13.907977       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:36:14.925133       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:14.925703       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\nI0305 01:36:23.917338       1 certsync_controller.go:65] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nI0305 01:36:23.917921       1 certsync_controller.go:162] Syncing secrets: [{csr-signer false}]\n
Mar 05 01:38:40.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=kube-controller-manager container exited with code 2 (Error): ] loaded client CA [5/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-03-05 00:37:36 +0000 UTC to 2030-03-03 00:37:36 +0000 UTC (now=2020-03-05 01:17:31.173348318 +0000 UTC))\nI0305 01:17:31.173388       1 tlsconfig.go:179] loaded client CA [6/"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"]: "aggregator-signer" [] issuer="<self>" (2020-03-05 00:37:39 +0000 UTC to 2020-03-06 00:37:39 +0000 UTC (now=2020-03-05 01:17:31.173374453 +0000 UTC))\nI0305 01:17:31.173830       1 tlsconfig.go:201] loaded serving cert ["serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key"]: "kube-controller-manager.openshift-kube-controller-manager.svc" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer="openshift-service-serving-signer@1583369605" (2020-03-05 00:53:38 +0000 UTC to 2022-03-05 00:53:39 +0000 UTC (now=2020-03-05 01:17:31.173805989 +0000 UTC))\nI0305 01:17:31.174281       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1583371051" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1583371049" (2020-03-05 00:17:29 +0000 UTC to 2021-03-05 00:17:29 +0000 UTC (now=2020-03-05 01:17:31.1742602 +0000 UTC))\nI0305 01:17:31.174329       1 secure_serving.go:178] Serving securely on [::]:10257\nI0305 01:17:31.174374       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0305 01:17:31.174503       1 tlsconfig.go:241] Starting DynamicServingCertificateController\n
Mar 05 01:38:40.117 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=cluster-policy-controller container exited with code 255 (Error): l tcp [::1]:6443: connect: connection refused\nE0305 01:37:08.664161       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=41686&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0305 01:37:08.665166       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:6443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=30266&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0305 01:37:08.668545       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=40976&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0305 01:37:08.670725       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=43887&timeout=7m48s&timeoutSeconds=468&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0305 01:37:08.671888       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=38877&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0305 01:37:08.918054       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0305 01:37:08.918104       1 policy_controller.go:94] leaderelection lost\nI0305 01:37:08.925307       1 reconciliation_controller.go:152] Shutting down ClusterQuotaReconcilationController\n
Mar 05 01:38:40.130 E ns/openshift-etcd pod/etcd-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=etcd-metrics container exited with code 2 (Error): 2020-03-05 01:12:05.262249 I | etcdmain: ServerTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-147-82.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-serving-metrics/etcd-serving-metrics-ip-10-0-147-82.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-metrics-proxy-serving-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:12:05.263172 I | etcdmain: listening for grpc-proxy client requests on 127.0.0.1:9977\n2020-03-05 01:12:05.263570 I | etcdmain: ClientTLS: cert = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-147-82.us-west-1.compute.internal.crt, key = /etc/kubernetes/static-pod-certs/secrets/etcd-all-peer/etcd-peer-ip-10-0-147-82.us-west-1.compute.internal.key, ca = /etc/kubernetes/static-pod-certs/configmaps/etcd-peer-client-ca/ca-bundle.crt, trusted-ca = , client-cert-auth = false, crl-file = \n2020-03-05 01:12:05.265543 I | etcdmain: grpc-proxy: listening for metrics on https://0.0.0.0:9979\nWARNING: 2020/03/05 01:12:05 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.147.82:9978: connect: connection refused". Reconnecting...\nWARNING: 2020/03/05 01:12:06 grpc: addrConn.createTransport failed to connect to {https://etcd-1.ci-op-p6dgihk0-f83f1.origin-ci-int-aws.dev.rhcloud.com:9978 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.0.147.82:9978: connect: connection refused". Reconnecting...\n
Mar 05 01:38:40.146 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-147-82.us-west-1.compute.internal node/ip-10-0-147-82.us-west-1.compute.internal container=scheduler container exited with code 2 (Error): casets.apps)\nE0305 01:17:36.633601       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0305 01:17:36.633754       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0305 01:17:36.633834       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: unknown (get pods)\nE0305 01:17:36.633965       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0305 01:17:36.634083       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0305 01:17:36.634266       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0305 01:17:36.634347       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0305 01:17:39.485195       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod a23f08fc-54aa-4240-bb3b-ceff3a69c5cd is not added to scheduler cache, so cannot be updated\nE0305 01:17:40.082686       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod a23f08fc-54aa-4240-bb3b-ceff3a69c5cd is not added to scheduler cache, so cannot be updated\nE0305 01:17:47.763469       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod a23f08fc-54aa-4240-bb3b-ceff3a69c5cd is not added to scheduler cache, so cannot be updated\nE0305 01:17:48.769650       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod a23f08fc-54aa-4240-bb3b-ceff3a69c5cd is not added to scheduler cache, so cannot be updated\nE0305 01:17:50.033029       1 eventhandlers.go:242] scheduler cache UpdatePod failed: pod a23f08fc-54aa-4240-bb3b-ceff3a69c5cd is not added to scheduler cache, so cannot be updated\n
Mar 05 01:38:40.161 E ns/openshift-sdn pod/ovs-5mh4k node/ip-10-0-147-82.us-west-1.compute.internal container=openvswitch container exited with code 1 (Error): :02.664Z|00273|bridge|INFO|bridge br0: added interface veth96602293 on port 94\n2020-03-05T01:36:02.709Z|00274|connmgr|INFO|br0<->unix#1181: 5 flow_mods in the last 0 s (5 adds)\n2020-03-05T01:36:02.772Z|00275|connmgr|INFO|br0<->unix#1185: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:36:02.773Z|00276|connmgr|INFO|br0<->unix#1187: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-03-05T01:36:06.127Z|00277|connmgr|INFO|br0<->unix#1193: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:36:06.213Z|00278|connmgr|INFO|br0<->unix#1196: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:36:06.239Z|00279|bridge|INFO|bridge br0: deleted interface veth96602293 on port 94\n2020-03-05T01:36:08.118Z|00280|bridge|INFO|bridge br0: added interface vethfbc38aaa on port 95\n2020-03-05T01:36:08.184Z|00281|connmgr|INFO|br0<->unix#1200: 5 flow_mods in the last 0 s (5 adds)\n2020-03-05T01:36:08.262Z|00282|connmgr|INFO|br0<->unix#1204: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:36:08.280Z|00283|connmgr|INFO|br0<->unix#1207: 2 flow_mods in the last 0 s (1 adds, 1 deletes)\n2020-03-05T01:36:08.300Z|00284|connmgr|INFO|br0<->unix#1209: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:36:08.357Z|00285|connmgr|INFO|br0<->unix#1212: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:36:08.406Z|00286|bridge|INFO|bridge br0: deleted interface vethde36e158 on port 65\n2020-03-05T01:36:12.085Z|00287|connmgr|INFO|br0<->unix#1220: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:36:12.116Z|00288|connmgr|INFO|br0<->unix#1223: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:36:12.142Z|00289|bridge|INFO|bridge br0: deleted interface vethfbc38aaa on port 95\ninfo: Saving flows ...\n2020-03-05T01:36:24Z|00001|jsonrpc|WARN|unix:/var/run/openvswitch/db.sock: receive error: Connection reset by peer\n2020-03-05T01:36:24Z|00002|reconnect|WARN|unix:/var/run/openvswitch/db.sock: connection dropped (Connection reset by peer)\novs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection reset by peer)\n
Mar 05 01:38:47.206 E ns/openshift-multus pod/multus-tg7nt node/ip-10-0-147-82.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:38:50.433 E ns/openshift-machine-config-operator pod/machine-config-daemon-qr5k7 node/ip-10-0-147-82.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Mar 05 01:38:50.452 E ns/openshift-multus pod/multus-tg7nt node/ip-10-0-147-82.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:38:58.583 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-147-82.us-west-1.compute.internal" not ready since 2020-03-05 01:38:40 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Mar 05 01:38:58.594 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-147-82.us-west-1.compute.internal" not ready since 2020-03-05 01:38:40 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Mar 05 01:38:58.597 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-147-82.us-west-1.compute.internal" not ready since 2020-03-05 01:38:40 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Mar 05 01:38:58.599 E clusteroperator/etcd changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-147-82.us-west-1.compute.internal" not ready since 2020-03-05 01:38:40 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
Mar 05 01:40:46.655 E ns/openshift-monitoring pod/node-exporter-tzmd8 node/ip-10-0-142-46.us-west-1.compute.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:37:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:13Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:43Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-03-05T01:38:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Mar 05 01:40:46.674 E ns/openshift-cluster-node-tuning-operator pod/tuned-9658v node/ip-10-0-142-46.us-west-1.compute.internal container=tuned container exited with code 143 (Error): on.daemon: using sleep interval of 1 second(s)\n2020-03-05 01:18:13,913 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.\n2020-03-05 01:18:13,913 INFO     tuned.daemon.daemon: Using 'openshift-node' profile\n2020-03-05 01:18:13,914 INFO     tuned.profiles.loader: loading profile: openshift-node\n2020-03-05 01:18:14,002 INFO     tuned.daemon.controller: starting controller\n2020-03-05 01:18:14,003 INFO     tuned.daemon.daemon: starting tuning\n2020-03-05 01:18:14,016 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu0, cpu1\n2020-03-05 01:18:14,017 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform\n2020-03-05 01:18:14,021 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias\n2020-03-05 01:18:14,022 INFO     tuned.plugins.base: instance disk: assigning devices dm-0, xvda\n2020-03-05 01:18:14,024 INFO     tuned.plugins.base: instance net: assigning devices ens3\n2020-03-05 01:18:14,177 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl\n2020-03-05 01:18:14,185 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node' applied\nI0305 01:32:52.848515     938 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:32:52.848957     938 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:36:24.544821     938 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0305 01:36:24.544841     938 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0305 01:36:24.567116     938 reflector.go:340] github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: watch of *v1.Tuned ended with: very short watch: github.com/openshift/cluster-node-tuning-operator/pkg/tuned/tuned.go:598: Unexpected watch close - watch lasted less than a second and no items received\n
Mar 05 01:40:46.724 E ns/openshift-sdn pod/ovs-wmknl node/ip-10-0-142-46.us-west-1.compute.internal container=openvswitch container exited with code 143 (Error): idge br0: deleted interface veth0292a13d on port 36\n2020-03-05T01:38:04.760Z|00100|connmgr|INFO|br0<->unix#875: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:38:04.802Z|00101|connmgr|INFO|br0<->unix#878: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:38:04.830Z|00102|bridge|INFO|bridge br0: deleted interface veth416668bb on port 41\n2020-03-05T01:38:04.899Z|00103|connmgr|INFO|br0<->unix#881: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:38:04.947Z|00104|connmgr|INFO|br0<->unix#884: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:38:04.973Z|00105|bridge|INFO|bridge br0: deleted interface veth9497967c on port 35\n2020-03-05T01:38:33.544Z|00106|connmgr|INFO|br0<->unix#906: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:38:33.572Z|00107|connmgr|INFO|br0<->unix#909: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:38:33.592Z|00108|bridge|INFO|bridge br0: deleted interface veth5f26a6a1 on port 15\n2020-03-05T01:38:48.803Z|00109|connmgr|INFO|br0<->unix#924: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:38:48.832Z|00110|connmgr|INFO|br0<->unix#927: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:38:48.857Z|00111|bridge|INFO|bridge br0: deleted interface vethc82dcc1a on port 40\n2020-03-05T01:38:50.636Z|00017|jsonrpc|WARN|unix#828: receive error: Connection reset by peer\n2020-03-05T01:38:50.636Z|00018|reconnect|WARN|unix#828: connection dropped (Connection reset by peer)\n2020-03-05T01:38:50.643Z|00019|jsonrpc|WARN|unix#829: receive error: Connection reset by peer\n2020-03-05T01:38:50.643Z|00020|reconnect|WARN|unix#829: connection dropped (Connection reset by peer)\n2020-03-05T01:38:50.591Z|00112|connmgr|INFO|br0<->unix#932: 2 flow_mods in the last 0 s (2 deletes)\n2020-03-05T01:38:50.623Z|00113|connmgr|INFO|br0<->unix#935: 4 flow_mods in the last 0 s (4 deletes)\n2020-03-05T01:38:50.650Z|00114|bridge|INFO|bridge br0: deleted interface veth3868238a on port 37\ninfo: Saving flows ...\n2020-03-05T01:39:01Z|00001|fatal_signal|WARN|terminating with signal 15 (Terminated)\nTerminated\n
Mar 05 01:40:46.726 E ns/openshift-multus pod/multus-mfbjx node/ip-10-0-142-46.us-west-1.compute.internal container=kube-multus container exited with code 143 (Error): 
Mar 05 01:40:46.744 E ns/openshift-machine-config-operator pod/machine-config-daemon-cztch node/ip-10-0-142-46.us-west-1.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Mar 05 01:40:50.674 E ns/openshift-multus pod/multus-mfbjx node/ip-10-0-142-46.us-west-1.compute.internal invariant violation: pod may not transition Running->Pending
Mar 05 01:40:57.694 E ns/openshift-machine-config-operator pod/machine-config-daemon-cztch node/ip-10-0-142-46.us-west-1.compute.internal container=oauth-proxy container exited with code 1 (Error):