Result: SUCCESS
Tests: 1 failed / 24 succeeded
Started: 2020-07-02 14:30
Elapsed: 1h35m
Work namespace: ci-op-4ztwk55l
Refs: release-4.4:37f5d3c7, 194:61b017f5
pod: 81832c1f-bc70-11ea-b4cd-0a580a810390
repo: openshift/cloud-credential-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (35m44s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
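
The focus regex in the command above is simply the backslash-escaped full test name, so only the failed monitor test is re-run. A minimal sketch of invoking it against an existing cluster, assuming a checkout that provides hack/e2e.go and an illustrative kubeconfig path:

# Point the e2e wrapper at the cluster under test (the path is an example).
export KUBECONFIG="$HOME/.kube/config"

# Re-run only "openshift-tests Monitor cluster while tests execute";
# the focus regex is the escaped test name from the report above.
go run hack/e2e.go -v -test \
  --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'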
172 error level events were detected during this test run:

Jul 02 15:20:00.075 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): /informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/csisnapshotcontrollers?allowWatchBookmarks=true&resourceVersion=18995&timeout=7m47s&timeoutSeconds=467&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:58.664971       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/ingress.operator.openshift.io/v1/dnsrecords?allowWatchBookmarks=true&resourceVersion=18989&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:19:58.990836       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0702 15:19:58.990977       1 controllermanager.go:291] leaderelection lost\nI0702 15:19:59.005128       1 pv_controller_base.go:310] Shutting down persistent volume controller\nI0702 15:19:59.077372       1 pv_controller_base.go:421] claim worker queue shutting down\nI0702 15:19:59.005178       1 deployment_controller.go:164] Shutting down deployment controller\nI0702 15:19:59.077418       1 pv_controller_base.go:364] volume worker queue shutting down\nI0702 15:19:59.005334       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0702 15:19:59.005342       1 daemon_controller.go:281] Shutting down daemon sets controller\nI0702 15:19:59.005343       1 cronjob_controller.go:101] Shutting down CronJob Manager\nI0702 15:19:59.005352       1 tokens_controller.go:188] Shutting down\nI0702 15:19:59.005351       1 replica_set.go:193] Shutting down replicaset controller\nI0702 15:19:59.005362       1 attach_detach_controller.go:378] Shutting down attach detach controller\nI0702 15:19:59.005370       1 pvc_protection_controller.go:112] Shutting down PVC protection controller\nI0702 15:19:59.005379       1 endpoints_controller.go:198] Shutting down endpoint controller\nI0702 15:19:59.005390       1 controller.go:222] Shutting down service controller\n
Jul 02 15:20:00.076 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=20895&timeout=8m9s&timeoutSeconds=489&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:59.471861       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=14406&timeout=5m4s&timeoutSeconds=304&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:59.473001       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=21074&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:59.474281       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=14372&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:59.475364       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=14372&timeout=9m10s&timeoutSeconds=550&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:19:59.478871       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=20934&timeoutSeconds=418&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:19:59.642274       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:19:59.642319       1 server.go:257] leaderelection lost\n
Jul 02 15:20:25.153 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:20:31.192 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): managed/configmaps?allowWatchBookmarks=true&resourceVersion=19822&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:30.428586       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=19822&timeout=9m39s&timeoutSeconds=579&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:30.431813       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=19538&timeoutSeconds=381&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:30.432945       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-config-managed/secrets?allowWatchBookmarks=true&resourceVersion=19559&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:30.434207       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=19559&timeout=9m10s&timeoutSeconds=550&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:30.436985       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?allowWatchBookmarks=true&resourceVersion=21070&timeout=8m48s&timeoutSeconds=528&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:20:30.853771       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 15:20:30.853831       1 leaderelection.go:67] leaderelection lost\n
Jul 02 15:20:33.213 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): rks=true&resourceVersion=20942&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:32.532754       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=18039&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:32.535545       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:6443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=14400&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:32.536529       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=14370&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:32.538273       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=18039&timeout=5m59s&timeoutSeconds=359&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:20:32.539411       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=20716&timeout=7m10s&timeoutSeconds=430&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:20:32.917653       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 15:20:32.917702       1 policy_controller.go:94] leaderelection lost\n
Jul 02 15:24:07.934 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-6d85975bbb-4vgxs node/ip-10-0-154-159.us-east-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): Name:"kube-apiserver-operator", UID:"ed471680-344d-4eed-b027-c99b33791b64", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-204-1.us-east-2.compute.internal" from revision 6 to 8 because static pod is ready\nI0702 15:21:55.171804       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ed471680-344d-4eed-b027-c99b33791b64", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 8" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8"\nI0702 15:21:57.338461       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ed471680-344d-4eed-b027-c99b33791b64", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-8 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0702 15:22:04.544887       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ed471680-344d-4eed-b027-c99b33791b64", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-8-ip-10-0-204-1.us-east-2.compute.internal -n openshift-kube-apiserver because it was missing\nI0702 15:24:07.175421       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:24:07.175830       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0702 15:24:07.175895       1 builder.go:209] server exited\n
Jul 02 15:24:26.009 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-57945f58fd-hqlrf node/ip-10-0-154-159.us-east-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): ube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal container=\"cluster-policy-controller\" is waiting: \"CrashLoopBackOff\" - \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal_openshift-kube-controller-manager(50fd2caa3135aa89bbe40b86c0a1d841)\"" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-204-1.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal container=\"cluster-policy-controller\" is not ready"\nI0702 15:21:34.726741       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"5859de74-c9e0-4d3c-8907-35bdb540f98a", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-204-1.us-east-2.compute.internal pods/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal container=\"cluster-policy-controller\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0702 15:24:25.224378       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:24:25.225573       1 base_controller.go:74] Shutting down RevisionController ...\nI0702 15:24:25.225602       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:24:25.226720       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0702 15:24:25.226805       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0702 15:24:25.226863       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0702 15:24:25.226889       1 targetconfigcontroller.go:644] Shutting down TargetConfigController\nF0702 15:24:25.226904       1 builder.go:243] stopped\n
Jul 02 15:24:31.026 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-6c99554b58-xls68 node/ip-10-0-154-159.us-east-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 0Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T15:04:13Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:20:55.690681       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"4e097728-6d44-4b1d-a954-1bf50da2d5b6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-204-1.us-east-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-204-1.us-east-2.compute.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0702 15:24:30.369817       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:24:30.372366       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:24:30.372395       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0702 15:24:30.372410       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0702 15:24:30.372460       1 target_config_reconciler.go:126] Shutting down TargetConfigReconciler\nI0702 15:24:30.372478       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0702 15:24:30.372876       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0702 15:24:30.372895       1 base_controller.go:39] All UnsupportedConfigOverridesController workers have been terminated\nI0702 15:24:30.372915       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0702 15:24:30.372925       1 base_controller.go:39] All LoggingSyncer workers have been terminated\nF0702 15:24:30.373053       1 builder.go:243] stopped\n
Jul 02 15:24:43.166 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-7fb7cf448-9kcmd node/ip-10-0-154-159.us-east-2.compute.internal container=openshift-apiserver-operator container exited with code 255 (Error): failed]]\nI0702 15:24:42.299477       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:24:42.299892       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:24:42.300385       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:24:42.300406       1 key_controller.go:363] Shutting down EncryptionKeyController\nI0702 15:24:42.300416       1 condition_controller.go:202] Shutting down EncryptionConditionController\nI0702 15:24:42.300426       1 prune_controller.go:204] Shutting down EncryptionPruneController\nI0702 15:24:42.300435       1 state_controller.go:171] Shutting down EncryptionStateController\nI0702 15:24:42.300446       1 migration_controller.go:327] Shutting down EncryptionMigrationController\nI0702 15:24:42.300459       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:24:42.300469       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0702 15:24:42.300478       1 prune_controller.go:232] Shutting down PruneController\nI0702 15:24:42.300487       1 status_controller.go:212] Shutting down StatusSyncer-openshift-apiserver\nI0702 15:24:42.300497       1 finalizer_controller.go:148] Shutting down NamespaceFinalizerController_openshift-apiserver\nI0702 15:24:42.300508       1 base_controller.go:73] Shutting down  ...\nI0702 15:24:42.300518       1 base_controller.go:73] Shutting down RevisionController ...\nI0702 15:24:42.300526       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0702 15:24:42.300540       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0702 15:24:42.300564       1 workload_controller.go:177] Shutting down OpenShiftAPIServerOperator\nI0702 15:24:42.300579       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nF0702 15:24:42.300984       1 builder.go:243] stopped\nF0702 15:24:42.310117       1 builder.go:210] server exited\n
Jul 02 15:24:57.423 E ns/openshift-machine-api pod/machine-api-operator-959bc7ffb-ttj8c node/ip-10-0-160-31.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 02 15:25:21.668 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:25:20.290161       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:25:20.290744       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:25:20.294347       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:25:20.294415       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:25:20.295297       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:26:22.318 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:26:20.148473       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:26:20.150804       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:26:20.152887       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:26:20.153831       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:26:20.155139       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:27:04.043 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-6845cc5f4-dbdnr node/ip-10-0-154-159.us-east-2.compute.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): ): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: {"conditions":[{"type":"Degraded","status":"False","lastTransitionTime":"2020-07-02T15:04:11Z","reason":"AsExpected"},{"type":"Progressing","status":"False","lastTransitionTime":"2020-07-02T15:04:12Z","reason":"AsExpected"},{"type":"Available","status":"False","lastTransitionTime":"2020-07-02T15:04:11Z","reason":"_NoMigratorPod","message":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-07-02T15:04:11Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-07-02-143022"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0702 15:13:33.908703       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"fa208467-3dc1-4d74-b72b-252fb0e50bbb", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0702 15:27:03.026987       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0702 15:27:03.027037       1 leaderelection.go:66] leaderelection lost\n
Jul 02 15:27:29.798 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:27:28.531112       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:27:28.533502       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:27:28.536425       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 15:27:28.537253       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\nI0702 15:27:28.536541       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\n
Jul 02 15:27:47.988 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:27:47.538781       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:27:47.540516       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:27:47.542306       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:27:47.542408       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:27:47.543010       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:28:31.451 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=kube-controller-manager container exited with code 255 (Error): 50632       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=25125&timeout=6m0s&timeoutSeconds=360&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:29.851827       1 reflector.go:307] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://localhost:6443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=22983&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:29.853031       1 reflector.go:307] github.com/openshift/client-go/template/informers/externalversions/factory.go:101: Failed to watch *v1.Template: Get https://localhost:6443/apis/template.openshift.io/v1/templates?allowWatchBookmarks=true&resourceVersion=25125&timeout=7m26s&timeoutSeconds=446&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:29.854493       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1alpha1/catalogsources?allowWatchBookmarks=true&resourceVersion=23971&timeout=9m42s&timeoutSeconds=582&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:29.855606       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=23550&timeout=6m21s&timeoutSeconds=381&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:28:30.510447       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0702 15:28:30.510573       1 controllermanager.go:291] leaderelection lost\n
Jul 02 15:28:47.596 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-86cb8d8f7b-9mcq9 node/ip-10-0-140-158.us-east-2.compute.internal container=operator container exited with code 255 (Error): tarting syncing operator at 2020-07-02 15:24:43.422093931 +0000 UTC m=+672.161701511\nI0702 15:24:43.443510       1 operator.go:148] Finished syncing operator at 21.410708ms\nI0702 15:28:26.241814       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:26.241802304 +0000 UTC m=+894.981410065\nI0702 15:28:26.261413       1 operator.go:148] Finished syncing operator at 19.60184ms\nI0702 15:28:38.643331       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:38.643321427 +0000 UTC m=+907.382929076\nI0702 15:28:38.664103       1 operator.go:148] Finished syncing operator at 20.77217ms\nI0702 15:28:40.331610       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:40.331600128 +0000 UTC m=+909.071207725\nI0702 15:28:40.802548       1 operator.go:148] Finished syncing operator at 470.940813ms\nI0702 15:28:40.802604       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:40.802597395 +0000 UTC m=+909.542205181\nI0702 15:28:41.036947       1 operator.go:148] Finished syncing operator at 234.334924ms\nI0702 15:28:41.037007       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:41.037000597 +0000 UTC m=+909.776608257\nI0702 15:28:41.161003       1 operator.go:148] Finished syncing operator at 123.993206ms\nI0702 15:28:46.607159       1 operator.go:146] Starting syncing operator at 2020-07-02 15:28:46.607148026 +0000 UTC m=+915.346755633\nI0702 15:28:46.631994       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:28:46.632547       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nI0702 15:28:46.632565       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0702 15:28:46.632578       1 logging_controller.go:93] Shutting down LogLevelController\nI0702 15:28:46.632762       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nF0702 15:28:46.632785       1 builder.go:243] stopped\n
Jul 02 15:28:48.737 E ns/openshift-kube-storage-version-migrator pod/migrator-86b998c9f-zxv4n node/ip-10-0-189-221.us-east-2.compute.internal container=migrator container exited with code 2 (Error): I0702 15:16:42.043465       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 02 15:28:56.270 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:28:56.333 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): ue&resourceVersion=24921&timeout=6m40s&timeoutSeconds=400&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:54.745371       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=23192&timeout=9m13s&timeoutSeconds=553&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:54.745575       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=22983&timeout=5m56s&timeoutSeconds=356&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:54.747413       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=20930&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:54.748860       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23551&timeout=8m57s&timeoutSeconds=537&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:28:54.751161       1 reflector.go:307] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=22983&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:28:55.142018       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0702 15:28:55.142077       1 policy_controller.go:94] leaderelection lost\n
Jul 02 15:28:57.146 E ns/openshift-authentication-operator pod/authentication-operator-5cb6875bc5-kph7g node/ip-10-0-154-159.us-east-2.compute.internal container=operator container exited with code 255 (Error): alse","type":"Degraded"},{"lastTransitionTime":"2020-07-02T15:28:46Z","message":"Progressing: not all deployment replicas are ready","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-07-02T15:18:46Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-02T15:04:10Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0702 15:28:51.300888       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"ffc4a6e7-f74c-46c3-b56c-93e1ca4cbb25", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing message changed from "Progressing: deployment's observed generation did not reach the expected generation" to "Progressing: not all deployment replicas are ready"\nI0702 15:28:56.237154       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:28:56.243578       1 controller.go:70] Shutting down AuthenticationOperator2\nI0702 15:28:56.243618       1 ingress_state_controller.go:157] Shutting down IngressStateController\nI0702 15:28:56.243639       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:28:56.243660       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0702 15:28:56.243678       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0702 15:28:56.243716       1 logging_controller.go:93] Shutting down LogLevelController\nI0702 15:28:56.243775       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0702 15:28:56.243795       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0702 15:28:56.243828       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nF0702 15:28:56.244201       1 builder.go:210] server exited\n
Jul 02 15:29:03.034 E ns/openshift-monitoring pod/node-exporter-qxmdb node/ip-10-0-160-31.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:08:53Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:29:05.718 E ns/openshift-monitoring pod/openshift-state-metrics-55df495cdd-hd5v6 node/ip-10-0-140-158.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 02 15:29:11.485 E ns/openshift-monitoring pod/node-exporter-nhgwh node/ip-10-0-204-1.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:09:01Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:09:01Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:29:14.204 E ns/openshift-controller-manager pod/controller-manager-sv5vj node/ip-10-0-154-159.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): I0702 15:09:47.296538       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-109-g75548a0)\nI0702 15:09:47.298855       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-4ztwk55l/stable-initial@sha256:2ade819b54479625070665c6d987dbf1b446d10c7ad495bf6d52c70711d8000e"\nI0702 15:09:47.298878       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-4ztwk55l/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0702 15:09:47.298965       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0702 15:09:47.299544       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nE0702 15:16:38.327279       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, square/go-jose: error in cryptographic primitive, token lookup failed]\n
Jul 02 15:29:14.391 E ns/openshift-controller-manager pod/controller-manager-qq2ql node/ip-10-0-204-1.us-east-2.compute.internal container=controller-manager container exited with code 137 (Error): decoding: unexpected EOF\nI0702 15:28:19.571610       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.571942       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.572364       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.572742       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.573170       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.573407       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.573654       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.573967       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.574340       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0702 15:28:19.574380       1 reflector.go:340] runtime/asm_amd64.s:1357: watch of *v1alpha1.ImageContentSourcePolicy ended with: very short watch: runtime/asm_amd64.s:1357: Unexpected watch close - watch lasted less than a second and no items received\nI0702 15:28:19.574681       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.575018       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.575302       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.575566       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.582830       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0702 15:28:19.584922       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jul 02 15:29:15.359 E ns/openshift-monitoring pod/kube-state-metrics-79974757b6-q58xp node/ip-10-0-189-221.us-east-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 02 15:29:17.216 E ns/openshift-monitoring pod/node-exporter-z6dkg node/ip-10-0-154-159.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:08:49Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:08:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:29:18.386 E ns/openshift-monitoring pod/prometheus-adapter-5d65989486-lshmm node/ip-10-0-189-221.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:17:05.493559       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:17:06.195572       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:29:22.815 E ns/openshift-monitoring pod/node-exporter-wx5x5 node/ip-10-0-140-158.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:12:23Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:23Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:29:31.851 E ns/openshift-monitoring pod/thanos-querier-7c579fb7b4-x9xc5 node/ip-10-0-140-158.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 02 15:17:05 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0702 15:17:05.737061       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:17:05 http.go:107: HTTPS: listening on [::]:9091\n2020/07/02 15:18:03 server.go:3055: http: TLS handshake error from 10.129.2.6:47968: read tcp 10.131.0.13:9091->10.129.2.6:47968: read: connection reset by peer\n2020/07/02 15:18:20 oauthproxy.go:774: basicauth: 10.130.0.6:56822 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:19:20 oauthproxy.go:774: basicauth: 10.130.0.6:57586 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:20:20 oauthproxy.go:774: basicauth: 10.130.0.6:58984 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:21:20 oauthproxy.go:774: basicauth: 10.130.0.6:59618 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:23:20 oauthproxy.go:774: basicauth: 10.130.0.6:60770 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:25:07 oauthproxy.go:774: basicauth: 10.128.0.43:54436 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:25:07 oauthproxy.go:774: basicauth: 10.128.0.43:54436 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:28:06 oauthproxy.go:774: basicauth: 10.128.0.43:35742 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:28:06 oauthproxy.go:774: basicauth: 10.128.0.43:35742 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:29:06 oauthproxy.go:774: basicauth: 10.128.0.43:36898 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:29:06 oauthproxy.go:774: basicauth: 10.128.0.43:36898 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:29:32.841 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-7bbf87fc7b" is progressing.
Jul 02 15:29:38.545 E ns/openshift-monitoring pod/prometheus-adapter-5d65989486-5lslm node/ip-10-0-237-60.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:17:05.576902       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:17:05.916366       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:29:44.775 E ns/openshift-monitoring pod/node-exporter-hsphv node/ip-10-0-189-221.us-east-2.compute.internal container=node-exporter container exited with code 143 (Error): -02T15:12:27Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-02T15:12:27Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 02 15:29:47.594 E ns/openshift-monitoring pod/thanos-querier-7c579fb7b4-rshmx node/ip-10-0-237-60.us-east-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 5:17:01 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:17:01 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:17:01 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:17:01 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:17:01 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/02 15:17:01 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:17:01 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:17:01 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:17:01.702603       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:17:20 oauthproxy.go:774: basicauth: 10.130.0.6:56060 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:22:20 oauthproxy.go:774: basicauth: 10.130.0.6:60234 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:24:08 oauthproxy.go:774: basicauth: 10.128.0.43:49232 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:26:07 oauthproxy.go:774: basicauth: 10.128.0.43:34110 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:26:07 oauthproxy.go:774: basicauth: 10.128.0.43:34110 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:27:06 oauthproxy.go:774: basicauth: 10.128.0.43:35080 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:27:06 oauthproxy.go:774: basicauth: 10.128.0.43:35080 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 02 15:29:48.615 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-237-60.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:29:43.617Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:29:43.624Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:29:43.625Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:29:43.626Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:29:43.626Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:29:43.626Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:29:43.627Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:29:43.627Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:29:53.903 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-158.us-east-2.compute.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/07/02 15:17:28 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/07/02 15:18:39 config map updated\n2020/07/02 15:18:39 successfully triggered reload\n
Jul 02 15:29:53.903 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-158.us-east-2.compute.internal container=prometheus-proxy container exited with code 2 (Error): 2020/07/02 15:17:28 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:17:28 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:17:28 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:17:29 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/02 15:17:29 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:17:29 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/07/02 15:17:29 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:17:29 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/02 15:17:29 http.go:107: HTTPS: listening on [::]:9091\nI0702 15:17:29.007429       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:21:30 oauthproxy.go:774: basicauth: 10.129.2.10:33040 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:24:51 oauthproxy.go:774: basicauth: 10.128.0.20:41702 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:26:01 oauthproxy.go:774: basicauth: 10.129.2.10:37940 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:29:02 oauthproxy.go:774: basicauth: 10.129.0.58:59210 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/02 15:29:1
Jul 02 15:29:53.903 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-140-158.us-east-2.compute.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-07-02T15:17:28.239043506Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-07-02T15:17:28.239159882Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-07-02T15:17:28.240889198Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-07-02T15:17:33.385200231Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jul 02 15:29:54.906 E ns/openshift-marketplace pod/redhat-marketplace-fcd8cc8cb-qjl2h node/ip-10-0-140-158.us-east-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Jul 02 15:30:15.682 E ns/openshift-marketplace pod/community-operators-659b555d67-hlc44 node/ip-10-0-237-60.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 15:30:19.024 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-221.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:30:13.886Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:30:13.894Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:30:13.895Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:30:13.896Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:30:13.896Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:30:13.896Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:30:13.900Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:30:13.900Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:30:19.726 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): ods)\nE0702 15:30:17.778072       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)\nE0702 15:30:17.778112       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)\nE0702 15:30:17.778158       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)\nE0702 15:30:17.778197       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)\nE0702 15:30:17.778232       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)\nE0702 15:30:17.778272       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)\nE0702 15:30:17.778305       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: unknown (get nodes)\nE0702 15:30:17.778338       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)\nE0702 15:30:17.778364       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)\nE0702 15:30:17.839909       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0702 15:30:17.844286       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: unknown (get configmaps)\nE0702 15:30:18.888431       1 cache.go:444] Pod a318edbd-b736-41fc-9871-f6b419098023 updated on a different node than previously added to.\nF0702 15:30:18.888458       1 cache.go:445] Schedulercache is corrupted and can badly affect scheduling decisions\n
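This kube-scheduler exit is deliberate: the scheduler's cache saw a pod reported on a different node than the one it had been added under, concluded its view of the cluster could no longer be trusted, and exited to rebuild the cache from scratch. A toy sketch of that consistency check, assuming a simplified cache keyed by pod UID (not the actual kube-scheduler cache):

package main

import "log"

// podCache is a toy stand-in for the scheduler's internal cache: it records
// which node each pod was added on and treats an update that reports the pod
// on a different node as corruption, which is the condition logged above.
type podCache struct {
	nodeByPod map[string]string // pod UID -> node name
}

func (c *podCache) addPod(uid, node string) {
	c.nodeByPod[uid] = node
}

func (c *podCache) updatePod(uid, node string) {
	if prev, ok := c.nodeByPod[uid]; ok && prev != node {
		log.Printf("Pod %s updated on a different node than previously added to.", uid)
		log.Fatal("Schedulercache is corrupted and can badly affect scheduling decisions")
	}
	c.nodeByPod[uid] = node
}

func main() {
	c := &podCache{nodeByPod: map[string]string{}}
	c.addPod("a318edbd-b736-41fc-9871-f6b419098023", "ip-10-0-160-31")
	c.updatePod("a318edbd-b736-41fc-9871-f6b419098023", "ip-10-0-154-159") // hits the fatal path
}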
Jul 02 15:30:36.536 E ns/openshift-console-operator pod/console-operator-556cb5b8db-sx9db node/ip-10-0-160-31.us-east-2.compute.internal container=console-operator container exited with code 255 (Error): 1.248287       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 95; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:26:03.195040       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 563; INTERNAL_ERROR") has prevented the request from succeeding\nW0702 15:26:03.199022       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 53; INTERNAL_ERROR") has prevented the request from succeeding\nI0702 15:30:35.736468       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:30:35.737626       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0702 15:30:35.739200       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0702 15:30:35.739213       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0702 15:30:35.739220       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0702 15:30:35.739227       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0702 15:30:35.739234       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0702 15:30:35.739245       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0702 15:30:35.739258       1 controller.go:70] Shutting down Console\nI0702 15:30:35.739267       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nF0702 15:30:35.739493       1 builder.go:243] stopped\n
Jul 02 15:31:21.388 E ns/openshift-console pod/console-7d8d5fccff-nkdv9 node/ip-10-0-204-1.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:17:41Z cmd/main: cookies are secure!\n2020-07-02T15:17:42Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:17:42Z cmd/main: using TLS\n
Jul 02 15:31:48.136 E ns/openshift-sdn pod/sdn-controller-qztdz node/ip-10-0-154-159.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0702 15:03:23.530966       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0702 15:07:13.750023       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 02 15:31:55.876 E ns/openshift-sdn pod/sdn-controller-b8nfm node/ip-10-0-160-31.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0702 15:03:18.838395       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0702 15:07:13.738811       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\nE0702 15:07:59.847356       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.162.124:6443: i/o timeout\n
Jul 02 15:32:01.488 E kube-apiserver Kube API started failing: Get https://api.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
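The "Kube API started failing" entries come from a simple availability poll against the kube-system namespace; "unexpected EOF" means the connection was dropped mid-response, consistent with an apiserver instance going down behind the load balancer during the rollout. A minimal sketch of such a probe, assuming an anonymous HTTPS GET with a 5-second timeout (the hostname is a placeholder, and a real check would authenticate and verify the serving CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A 5-second client timeout matches the ?timeout=5s in the URL above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped only to keep the sketch
		// self-contained; a real check would trust the cluster's serving CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Placeholder endpoint; the monitor polls the cluster's own API URL and
	// authenticates with a bearer token, which is omitted here.
	url := "https://api.example.invalid:6443/api/v1/namespaces/kube-system?timeout=5s"

	resp, err := client.Get(url)
	if err != nil {
		// Both failure modes seen in this run surface here: "unexpected EOF"
		// when the connection is cut mid-response, and "context deadline
		// exceeded" when nothing answers within the timeout.
		fmt.Printf("Kube API started failing: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Kube API reachable:", resp.Status)
}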
Jul 02 15:32:12.963 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=kube-scheduler container exited with code 255 (Error): *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30229&timeoutSeconds=356&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:12.603991       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=23552&timeout=8m36s&timeoutSeconds=516&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:12.605614       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30186&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:12.606803       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=28058&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:12.607900       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=23552&timeout=8m14s&timeoutSeconds=494&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:12.609034       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=23971&timeout=9m39s&timeoutSeconds=579&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:32:12.776043       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0702 15:32:12.776078       1 server.go:257] leaderelection lost\n
Jul 02 15:32:16.624 E ns/openshift-multus pod/multus-admission-controller-76c7h node/ip-10-0-204-1.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 02 15:32:22.644 E ns/openshift-sdn pod/sdn-controller-6qb7d node/ip-10-0-204-1.us-east-2.compute.internal container=sdn-controller container exited with code 2 (Error): 0.0.140.158", subnet: "10.131.0.0/23")\nI0702 15:12:00.802236       1 subnets.go:149] Created HostSubnet ip-10-0-189-221.us-east-2.compute.internal (host: "ip-10-0-189-221.us-east-2.compute.internal", ip: "10.0.189.221", subnet: "10.128.2.0/23")\nI0702 15:12:02.663544       1 subnets.go:149] Created HostSubnet ip-10-0-237-60.us-east-2.compute.internal (host: "ip-10-0-237-60.us-east-2.compute.internal", ip: "10.0.237.60", subnet: "10.129.2.0/23")\nI0702 15:19:08.180889       1 vnids.go:115] Allocated netid 1782226 for namespace "e2e-frontend-ingress-available-3246"\nI0702 15:19:08.191522       1 vnids.go:115] Allocated netid 14304442 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-372"\nI0702 15:19:08.200234       1 vnids.go:115] Allocated netid 16101599 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-7716"\nI0702 15:19:08.235364       1 vnids.go:115] Allocated netid 8964402 for namespace "e2e-openshift-api-available-3872"\nI0702 15:19:08.256610       1 vnids.go:115] Allocated netid 3556817 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-4249"\nI0702 15:19:08.272859       1 vnids.go:115] Allocated netid 3279884 for namespace "e2e-k8s-service-lb-available-117"\nI0702 15:19:08.294375       1 vnids.go:115] Allocated netid 15855754 for namespace "e2e-kubernetes-api-available-5015"\nI0702 15:19:08.306126       1 vnids.go:115] Allocated netid 5281427 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-6985"\nI0702 15:19:08.324898       1 vnids.go:115] Allocated netid 14874858 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8189"\nI0702 15:19:08.357214       1 vnids.go:115] Allocated netid 11618794 for namespace "e2e-k8s-sig-apps-job-upgrade-1325"\nW0702 15:28:04.402862       1 reflector.go:326] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: watch of *v1.HostSubnet ended with: very short watch: github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Unexpected watch close - watch lasted less than a second and no items received\n
Jul 02 15:32:25.684 E ns/openshift-sdn pod/sdn-pv2fm node/ip-10-0-204-1.us-east-2.compute.internal container=sdn container exited with code 255 (Error): tch/db.sock: database connection failed (No such file or directory)\nW0702 15:32:23.104690    1831 pod.go:274] CNI_ADD openshift-multus/multus-admission-controller-z96lj failed: exit status 1\nI0702 15:32:23.114071    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0702 15:32:23.116747    1831 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-z96lj\nI0702 15:32:23.175440    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0702 15:32:23.179818    1831 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-z96lj\nI0702 15:32:23.902391    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0702 15:32:23.906618    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nW0702 15:32:23.909761    1831 pod.go:274] CNI_ADD openshift-multus/multus-admission-controller-z96lj failed: exit status 1\nI0702 15:32:23.917630    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0702 15:32:23.920212    1831 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-z96lj\nI0702 15:32:23.976802    1831 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0702 15:32:23.980765    1831 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-z96lj\nI0702 15:32:25.064626    1831 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:32:25.064764    1831 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:32:38.049 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 02 15:32:39.083 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=kube-controller-manager-recovery-controller container exited with code 255 (Error): ed/secrets?allowWatchBookmarks=true&resourceVersion=27662&timeout=8m3s&timeoutSeconds=483&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:37.604818       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-config/configmaps?allowWatchBookmarks=true&resourceVersion=24391&timeout=9m37s&timeoutSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:37.622879       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?allowWatchBookmarks=true&resourceVersion=30204&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:37.642229       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?allowWatchBookmarks=true&resourceVersion=27662&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:37.643358       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *v1.Secret: Get https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?allowWatchBookmarks=true&resourceVersion=23192&timeout=5m34s&timeoutSeconds=334&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0702 15:32:37.644472       1 reflector.go:307] runtime/asm_amd64.s:1357: Failed to watch *unstructured.Unstructured: Get https://localhost:6443/apis/operator.openshift.io/v1/kubecontrollermanagers?allowWatchBookmarks=true&resourceVersion=28802&timeoutSeconds=393&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0702 15:32:38.554011       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\nF0702 15:32:38.554185       1 leaderelection.go:67] leaderelection lost\n
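Several of the exit-code-255 events in this run (the kube-scheduler above and this cert-recovery-controller, for example) follow one pattern: the node's own apiserver goes away during its static-pod rollout, lease renewal against localhost:6443 keeps failing until the renew deadline expires, and the process exits on purpose so another instance can take over the lease. A stand-alone sketch of that pattern (the timings and the renewLease stub are illustrative; the real components use client-go's leader election):

package main

import (
	"errors"
	"log"
	"time"
)

// renewLease stands in for the API call that refreshes the leader lease; here
// it always fails, the way it does while the node's own apiserver is
// restarting ("connect: connection refused" in the log above).
func renewLease() error {
	return errors.New("dial tcp [::1]:6443: connect: connection refused")
}

func main() {
	// Illustrative timings; the real components use client-go's
	// leaderelection package with longer deadlines.
	renewDeadline := 2 * time.Second
	retryPeriod := 500 * time.Millisecond

	deadline := time.Now().Add(renewDeadline)
	for time.Now().Before(deadline) {
		if err := renewLease(); err == nil {
			return // lease renewed, keep leading
		}
		time.Sleep(retryPeriod)
	}

	// Losing the lease is not recoverable in-process, so the controller exits
	// and lets a replica on another master (or its own restart) take over.
	log.Println("failed to renew lease: timed out waiting for the condition")
	log.Fatal("leaderelection lost")
}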
Jul 02 15:32:48.290 E ns/openshift-sdn pod/sdn-lxq9r node/ip-10-0-140-158.us-east-2.compute.internal container=sdn container exited with code 255 (Error): t-kube-scheduler/scheduler:https to [10.0.154.159:10259 10.0.204.1:10259]\nI0702 15:32:12.966954    2225 roundrobin.go:217] Delete endpoint 10.0.160.31:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0702 15:32:13.113500    2225 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:32:13.113527    2225 proxier.go:347] userspace syncProxyRules took 34.94527ms\nI0702 15:32:39.085816    2225 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.154.159:10257 10.0.204.1:10257]\nI0702 15:32:39.085857    2225 roundrobin.go:217] Delete endpoint 10.0.160.31:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0702 15:32:39.219860    2225 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:32:39.219887    2225 proxier.go:347] userspace syncProxyRules took 28.364958ms\nI0702 15:32:40.074042    2225 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.154.159:10257 10.0.160.31:10257 10.0.204.1:10257]\nI0702 15:32:40.203830    2225 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:32:40.203852    2225 proxier.go:347] userspace syncProxyRules took 27.401133ms\nI0702 15:32:43.250789    2225 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0702 15:32:43.250823    2225 metrics.go:133] failed to dump OVS flows for metrics: exit status 1\nI0702 15:32:44.373419    2225 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0702 15:32:47.512501    2225 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:32:47.512562    2225 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:33:13.320 E ns/openshift-sdn pod/sdn-7ccbg node/ip-10-0-160-31.us-east-2.compute.internal container=sdn container exited with code 255 (Error): ller:webhook"\nI0702 15:32:52.003029   85244 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.4:8443 10.129.0.73:8443]\nI0702 15:32:52.003072   85244 roundrobin.go:217] Delete endpoint 10.130.0.15:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:32:52.145069   85244 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:32:52.145095   85244 proxier.go:347] userspace syncProxyRules took 30.713759ms\nI0702 15:32:52.308400   85244 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:32:52.308435   85244 proxier.go:347] userspace syncProxyRules took 38.627835ms\nI0702 15:33:00.663115   85244 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.154.159:10259 10.0.160.31:10259 10.0.204.1:10259]\nI0702 15:33:00.904819   85244 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:00.904846   85244 proxier.go:347] userspace syncProxyRules took 52.273091ms\nI0702 15:33:03.134432   85244 roundrobin.go:295] LoadBalancerRR: Removing endpoints for openshift-cluster-version/cluster-version-operator:metrics\nI0702 15:33:03.329781   85244 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:03.329814   85244 proxier.go:347] userspace syncProxyRules took 36.286633ms\nI0702 15:33:04.160942   85244 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-cluster-version/cluster-version-operator:metrics to [10.0.160.31:9099]\nI0702 15:33:04.311328   85244 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:04.311355   85244 proxier.go:347] userspace syncProxyRules took 33.728685ms\nI0702 15:33:13.033749   85244 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:33:13.033809   85244 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:33:39.480 E ns/openshift-sdn pod/sdn-z7n64 node/ip-10-0-189-221.us-east-2.compute.internal container=sdn container exited with code 255 (Error): 15:33:09.327301   68258 cmd.go:173] openshift-sdn network plugin registering startup\nI0702 15:33:09.327448   68258 cmd.go:177] openshift-sdn network plugin ready\nI0702 15:33:38.644245   68258 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.4:6443 10.129.0.73:6443 10.130.0.82:6443]\nI0702 15:33:38.644296   68258 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.4:8443 10.129.0.73:8443 10.130.0.82:8443]\nI0702 15:33:38.655169   68258 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.82:6443]\nI0702 15:33:38.655201   68258 roundrobin.go:217] Delete endpoint 10.128.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:33:38.655222   68258 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.82:8443]\nI0702 15:33:38.655237   68258 roundrobin.go:217] Delete endpoint 10.128.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:33:38.826766   68258 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:38.826793   68258 proxier.go:347] userspace syncProxyRules took 36.80122ms\nI0702 15:33:38.888647   68258 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: no such file or directory\nI0702 15:33:38.981498   68258 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:38.981531   68258 proxier.go:347] userspace syncProxyRules took 36.594029ms\nI0702 15:33:39.240351   68258 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:33:39.240396   68258 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 02 15:33:59.741 E ns/openshift-sdn pod/sdn-6rp2z node/ip-10-0-154-159.us-east-2.compute.internal container=sdn container exited with code 255 (Error): y services and endpoints initialized\nI0702 15:33:33.950423   96355 cmd.go:173] openshift-sdn network plugin registering startup\nI0702 15:33:33.950562   96355 cmd.go:177] openshift-sdn network plugin ready\nI0702 15:33:34.653374   96355 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-r6vgw got IP 10.130.0.82, ofport 83\nI0702 15:33:38.645300   96355 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.4:6443 10.129.0.73:6443 10.130.0.82:6443]\nI0702 15:33:38.645343   96355 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.4:8443 10.129.0.73:8443 10.130.0.82:8443]\nI0702 15:33:38.657918   96355 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.82:6443]\nI0702 15:33:38.657968   96355 roundrobin.go:217] Delete endpoint 10.128.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0702 15:33:38.658025   96355 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.82:8443]\nI0702 15:33:38.658050   96355 roundrobin.go:217] Delete endpoint 10.128.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0702 15:33:38.796383   96355 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:38.796407   96355 proxier.go:347] userspace syncProxyRules took 32.610513ms\nI0702 15:33:39.038775   96355 proxier.go:368] userspace proxy: processing 0 service events\nI0702 15:33:39.038950   96355 proxier.go:347] userspace syncProxyRules took 49.958115ms\nI0702 15:33:58.836985   96355 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0702 15:33:58.837038   96355 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
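The run of sdn pod restarts above traces back to a health check on the local Open vSwitch database socket: when OVS is restarted underneath the SDN pod during the node update, the socket disappears, ovs-vsctl/ovs-ofctl calls start failing, and the SDN process exits fatally so it can reprogram its flows against the new OVS instance. A minimal sketch of such a socket probe (the socket path comes from the log; the probe itself is illustrative, not the openshift-sdn healthcheck):

package main

import (
	"log"
	"net"
	"time"
)

// Conventional ovsdb-server socket path, taken from the log above.
const ovsdbSocket = "/var/run/openvswitch/db.sock"

// probeOVS reports whether the local ovsdb-server unix socket accepts
// connections.
func probeOVS() error {
	conn, err := net.DialTimeout("unix", ovsdbSocket, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeOVS(); err != nil {
		log.Printf("SDN healthcheck unable to reconnect to OVS server: %v", err)
		// Mirrors the fatal exit above: the process restarts so it can
		// reinitialize against the new OVS instance.
		log.Fatal("SDN healthcheck detected OVS server change, restarting: OVS reinitialization required")
	}
	log.Println("OVS server reachable")
}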
Jul 02 15:34:06.524 E ns/openshift-multus pod/multus-7frjb node/ip-10-0-160-31.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:34:09.515 E ns/openshift-multus pod/multus-admission-controller-v47ft node/ip-10-0-160-31.us-east-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 02 15:34:56.714 E ns/openshift-multus pod/multus-lfcf6 node/ip-10-0-189-221.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:35:52.649 E ns/openshift-multus pod/multus-qqqd7 node/ip-10-0-204-1.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:36:44.603 E ns/openshift-multus pod/multus-gx64x node/ip-10-0-237-60.us-east-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 02 15:39:12.732 E ns/openshift-machine-config-operator pod/machine-config-daemon-b552r node/ip-10-0-140-158.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:39:17.347 E ns/openshift-machine-config-operator pod/machine-config-daemon-pzxbr node/ip-10-0-189-221.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:39:22.017 E ns/openshift-machine-config-operator pod/machine-config-daemon-9tr8q node/ip-10-0-154-159.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:39:57.016 E ns/openshift-machine-config-operator pod/machine-config-daemon-l7ktr node/ip-10-0-237-60.us-east-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 02 15:40:08.200 E ns/openshift-machine-config-operator pod/machine-config-controller-79bd7f6957-lm852 node/ip-10-0-154-159.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): 751a\nI0702 15:13:11.983001       1 node_controller.go:452] Pool worker: node ip-10-0-189-221.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d9b759edb325d7b505b82322c5f7751a\nI0702 15:13:11.983011       1 node_controller.go:452] Pool worker: node ip-10-0-189-221.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0702 15:13:13.797848       1 node_controller.go:452] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-d9b759edb325d7b505b82322c5f7751a\nI0702 15:13:13.797886       1 node_controller.go:452] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-d9b759edb325d7b505b82322c5f7751a\nI0702 15:13:13.797897       1 node_controller.go:452] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0702 15:16:44.004295       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:16:44.390522       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0702 15:19:50.721248       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:19:50.760421       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0702 15:24:43.309389       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:24:43.472145       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\nI0702 15:32:04.333943       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool master\nI0702 15:32:04.554068       1 container_runtime_config_controller.go:714] Applied ImageConfig cluster on MachineConfigPool worker\n
Jul 02 15:41:39.518 E ns/openshift-machine-config-operator pod/machine-config-server-bcw77 node/ip-10-0-154-159.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 15:04:41.355797       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:04:41.357221       1 api.go:56] Launching server on :22624\nI0702 15:04:41.357344       1 api.go:56] Launching server on :22623\nI0702 15:09:36.273839       1 api.go:102] Pool worker requested by 10.0.212.40:38562\nI0702 15:09:37.631042       1 api.go:102] Pool worker requested by 10.0.162.124:3352\n
Jul 02 15:41:43.013 E ns/openshift-machine-config-operator pod/machine-config-server-czqtz node/ip-10-0-204-1.us-east-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0702 15:04:39.589384       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-4-g716cfce9-dirty (716cfce99c3b38375fbc22f49d83b202cfcb3d50)\nI0702 15:04:39.590490       1 api.go:56] Launching server on :22624\nI0702 15:04:39.590541       1 api.go:56] Launching server on :22623\n
Jul 02 15:41:50.370 E ns/openshift-monitoring pod/openshift-state-metrics-5655589447-qmsr4 node/ip-10-0-237-60.us-east-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 02 15:41:50.422 E ns/openshift-monitoring pod/prometheus-adapter-8b8cf5789-w5xpk node/ip-10-0-237-60.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:29:36.991710       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:29:38.020379       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:41:50.630 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-7f997wl2b node/ip-10-0-154-159.us-east-2.compute.internal container=operator container exited with code 255 (Error): or.go:105: forcing resync\nI0702 15:39:03.017202       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:39:03.018473       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:39:03.034512       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:39:03.035479       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0702 15:39:19.807670       1 httplog.go:90] GET /metrics: (7.107947ms) 200 [Prometheus/2.15.2 10.128.2.34:51072]\nI0702 15:39:23.404867       1 httplog.go:90] GET /metrics: (2.122553ms) 200 [Prometheus/2.15.2 10.129.2.30:34858]\nI0702 15:39:49.807737       1 httplog.go:90] GET /metrics: (7.205878ms) 200 [Prometheus/2.15.2 10.128.2.34:51072]\nI0702 15:39:53.405130       1 httplog.go:90] GET /metrics: (2.39469ms) 200 [Prometheus/2.15.2 10.129.2.30:34858]\nI0702 15:40:19.808421       1 httplog.go:90] GET /metrics: (7.77433ms) 200 [Prometheus/2.15.2 10.128.2.34:51072]\nI0702 15:40:23.404739       1 httplog.go:90] GET /metrics: (1.951329ms) 200 [Prometheus/2.15.2 10.129.2.30:34858]\nI0702 15:40:31.997141       1 reflector.go:418] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: Watch close - *v1.ConfigMap total 1 items received\nI0702 15:40:49.807770       1 httplog.go:90] GET /metrics: (7.150909ms) 200 [Prometheus/2.15.2 10.128.2.34:51072]\nI0702 15:40:53.404842       1 httplog.go:90] GET /metrics: (2.053891ms) 200 [Prometheus/2.15.2 10.129.2.30:34858]\nI0702 15:41:19.814206       1 httplog.go:90] GET /metrics: (13.537074ms) 200 [Prometheus/2.15.2 10.128.2.34:51072]\nI0702 15:41:23.404900       1 httplog.go:90] GET /metrics: (2.116896ms) 200 [Prometheus/2.15.2 10.129.2.30:34858]\nI0702 15:41:49.500419       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0702 15:41:49.501100       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0702 15:41:49.501132       1 builder.go:243] stopped\n
Jul 02 15:41:56.786 E ns/openshift-machine-api pod/machine-api-operator-69bd8c7d75-7qxng node/ip-10-0-154-159.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 02 15:41:57.795 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-jwrdj node/ip-10-0-154-159.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 02 15:41:57.840 E ns/openshift-machine-config-operator pod/machine-config-controller-569fbdc87d-l4z29 node/ip-10-0-154-159.us-east-2.compute.internal container=machine-config-controller container exited with code 2 (Error): tion.openshift.io/v1  } {MachineConfig  99-worker-17f98fd4-6f6f-49d3-8674-db6cc8b9262a-registries  machineconfiguration.openshift.io/v1  } {MachineConfig  99-worker-ssh  machineconfiguration.openshift.io/v1  }]\nI0702 15:41:42.543795       1 render_controller.go:516] Pool worker: now targeting: rendered-worker-66268e4ee3a7873bda767e7b71068a6b\nI0702 15:41:42.545019       1 render_controller.go:516] Pool master: now targeting: rendered-master-3192b815ed9a1bd8ffd3c7d1f93d1f70\nI0702 15:41:47.545587       1 node_controller.go:758] Setting node ip-10-0-154-159.us-east-2.compute.internal to desired config rendered-master-3192b815ed9a1bd8ffd3c7d1f93d1f70\nI0702 15:41:47.546104       1 node_controller.go:758] Setting node ip-10-0-237-60.us-east-2.compute.internal to desired config rendered-worker-66268e4ee3a7873bda767e7b71068a6b\nI0702 15:41:47.584985       1 node_controller.go:452] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-66268e4ee3a7873bda767e7b71068a6b\nI0702 15:41:47.585895       1 node_controller.go:452] Pool master: node ip-10-0-154-159.us-east-2.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-master-3192b815ed9a1bd8ffd3c7d1f93d1f70\nI0702 15:41:48.580919       1 node_controller.go:452] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0702 15:41:48.599262       1 node_controller.go:433] Pool worker: node ip-10-0-237-60.us-east-2.compute.internal is now reporting unready: node ip-10-0-237-60.us-east-2.compute.internal is reporting Unschedulable\nI0702 15:41:48.605515       1 node_controller.go:452] Pool master: node ip-10-0-154-159.us-east-2.compute.internal changed machineconfiguration.openshift.io/state = Working\nI0702 15:41:48.630469       1 node_controller.go:433] Pool master: node ip-10-0-154-159.us-east-2.compute.internal is now reporting unready: node ip-10-0-154-159.us-east-2.compute.internal is reporting Unschedulable\n
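The machine-config-controller lines above show the rollout that triggers the node reboots and pod churn later in this run: the controller stamps each node with a desiredConfig annotation, the daemon drains and reboots the node, and the node counts as updated once currentConfig matches desiredConfig and state is Done. A small sketch of that updated-node condition (annotation keys are taken from the log; the helper is an illustration of the condition, not the MCO code):

package main

import "fmt"

// Annotation keys as they appear in the machine-config-controller log above.
const (
	currentConfigAnno = "machineconfiguration.openshift.io/currentConfig"
	desiredConfigAnno = "machineconfiguration.openshift.io/desiredConfig"
	stateAnno         = "machineconfiguration.openshift.io/state"
)

// nodeUpdated expresses the condition the rollout is driving toward: the
// applied config matches the desired one and the daemon reports Done.
func nodeUpdated(annotations map[string]string) bool {
	return annotations[currentConfigAnno] == annotations[desiredConfigAnno] &&
		annotations[stateAnno] == "Done"
}

func main() {
	node := map[string]string{
		currentConfigAnno: "rendered-worker-d9b759edb325d7b505b82322c5f7751a",
		desiredConfigAnno: "rendered-worker-66268e4ee3a7873bda767e7b71068a6b",
		stateAnno:         "Working",
	}
	fmt.Println("node updated:", nodeUpdated(node)) // false: a drain and reboot are still pending
}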
Jul 02 15:42:14.295 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-140-158.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:42:08.252Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:42:08.255Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:42:08.255Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:42:08.256Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:42:08.256Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:42:08.256Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:42:08.257Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:42:08.257Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:42:18.481 E kube-apiserver Kube API started failing: Get https://api.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jul 02 15:42:20.589 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-pkwh7 node/ip-10-0-204-1.us-east-2.compute.internal container=nodelink-controller container exited with code 255 (Error): 
Jul 02 15:42:20.589 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-pkwh7 node/ip-10-0-204-1.us-east-2.compute.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Jul 02 15:44:23.415 E ns/openshift-cluster-node-tuning-operator pod/tuned-sgctb node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.423 E ns/openshift-monitoring pod/node-exporter-6mbsq node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.440 E ns/openshift-image-registry pod/node-ca-45ngc node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.450 E ns/openshift-sdn pod/ovs-2q8n5 node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.476 E ns/openshift-sdn pod/sdn-m4b4l node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.491 E ns/openshift-multus pod/multus-xvfb2 node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.511 E ns/openshift-dns pod/dns-default-sfpnh node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:23.533 E ns/openshift-machine-config-operator pod/machine-config-daemon-zgp4t node/ip-10-0-237-60.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:26.764 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jul 02 15:44:26.906 E ns/openshift-monitoring pod/node-exporter-wvw79 node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:26.917 E ns/openshift-cluster-node-tuning-operator pod/tuned-dz5rj node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:26.935 E ns/openshift-controller-manager pod/controller-manager-s8fkf node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.006 E ns/openshift-image-registry pod/node-ca-wzsgm node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.017 E ns/openshift-sdn pod/sdn-controller-t7rwh node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.032 E ns/openshift-multus pod/multus-jbs86 node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.042 E ns/openshift-multus pod/multus-admission-controller-r6vgw node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.068 E ns/openshift-sdn pod/ovs-ftzx2 node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.100 E ns/openshift-dns pod/dns-default-jmvkh node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.113 E ns/openshift-machine-config-operator pod/machine-config-server-p4s7s node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:44:27.125 E ns/openshift-machine-config-operator pod/machine-config-daemon-f6d9s node/ip-10-0-154-159.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
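The "invariant violation" entries above come from the test monitor rather than from the cluster: it tracks each pod's last observed phase and flags any pod that appears to move from Running back to Pending, which here lines up with the nodes rebooting into their new machine config. A stand-alone sketch of that check (the observation type and helper are illustrative, not the openshift-tests implementation):

package main

import "fmt"

// observation is an illustrative record of a pod's identity and last seen
// phase; the real monitor works from watch events on v1.Pod objects.
type observation struct {
	uid   string
	phase string
}

// violatesPhaseInvariant reports whether seeing curr after prev breaks the
// rule flagged above: the same pod (same UID) may not go from Running back
// to Pending.
func violatesPhaseInvariant(prev, curr observation) bool {
	return prev.uid == curr.uid && prev.phase == "Running" && curr.phase == "Pending"
}

func main() {
	prev := observation{uid: "example-uid", phase: "Running"}
	curr := observation{uid: "example-uid", phase: "Pending"}
	if violatesPhaseInvariant(prev, curr) {
		fmt.Println("invariant violation: pod may not transition Running->Pending")
	}
}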
Jul 02 15:44:31.555 E ns/openshift-machine-config-operator pod/machine-config-daemon-zgp4t node/ip-10-0-237-60.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:44:36.723 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jul 02 15:44:38.265 E ns/openshift-machine-config-operator pod/machine-config-daemon-f6d9s node/ip-10-0-154-159.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:44:42.356 E ns/openshift-marketplace pod/community-operators-575cc8d579-zjm6d node/ip-10-0-189-221.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 15:44:42.405 E ns/openshift-monitoring pod/grafana-549448bd9d-sw96t node/ip-10-0-189-221.us-east-2.compute.internal container=grafana container exited with code 1 (Error): 
Jul 02 15:44:42.450 E ns/openshift-monitoring pod/prometheus-adapter-8b8cf5789-45b9x node/ip-10-0-189-221.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:29:17.366820       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:29:19.003666       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:44:43.478 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-7c55789bd9-m5cjb node/ip-10-0-189-221.us-east-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Jul 02 15:44:43.520 E ns/openshift-marketplace pod/certified-operators-558474559f-w5666 node/ip-10-0-189-221.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 02 15:44:43.551 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-189-221.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 15:29:32 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:44:43.551 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-189-221.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 15:29:33 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:29:33 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:29:33 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:29:33 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 15:29:33 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:29:33 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:29:33 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/02 15:29:33 http.go:107: HTTPS: listening on [::]:9095\nI0702 15:29:33.063838       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jul 02 15:44:54.008 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-237-60.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:44:51.090Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:44:51.100Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:44:51.102Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:44:51.103Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:44:51.103Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:44:51.103Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:44:51.103Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:44:54.196 E ns/openshift-service-ca-operator pod/service-ca-operator-866b7f54b4-dxn2l node/ip-10-0-204-1.us-east-2.compute.internal container=operator container exited with code 255 (Error): 
Jul 02 15:44:56.972 E ns/openshift-cluster-machine-approver pod/machine-approver-5c7fc6d7c4-pvcb5 node/ip-10-0-204-1.us-east-2.compute.internal container=machine-approver-controller container exited with code 2 (Error): 5:28:57.349156       1 config.go:23] machine approver config: {NodeClientCert:{Disabled:false}}\nI0702 15:28:57.349326       1 main.go:236] Starting Machine Approver\nI0702 15:28:57.450096       1 main.go:146] CSR csr-dr8l2 added\nI0702 15:28:57.450133       1 main.go:149] CSR csr-dr8l2 is already approved\nI0702 15:28:57.450155       1 main.go:146] CSR csr-n9kg6 added\nI0702 15:28:57.450164       1 main.go:149] CSR csr-n9kg6 is already approved\nI0702 15:28:57.450177       1 main.go:146] CSR csr-ncs2z added\nI0702 15:28:57.450187       1 main.go:149] CSR csr-ncs2z is already approved\nI0702 15:28:57.450200       1 main.go:146] CSR csr-tw7ps added\nI0702 15:28:57.450211       1 main.go:149] CSR csr-tw7ps is already approved\nI0702 15:28:57.450224       1 main.go:146] CSR csr-x6ghw added\nI0702 15:28:57.450235       1 main.go:149] CSR csr-x6ghw is already approved\nI0702 15:28:57.450248       1 main.go:146] CSR csr-rj9k7 added\nI0702 15:28:57.450258       1 main.go:149] CSR csr-rj9k7 is already approved\nI0702 15:28:57.450278       1 main.go:146] CSR csr-4frk7 added\nI0702 15:28:57.450289       1 main.go:149] CSR csr-4frk7 is already approved\nI0702 15:28:57.450310       1 main.go:146] CSR csr-95qdv added\nI0702 15:28:57.450346       1 main.go:149] CSR csr-95qdv is already approved\nI0702 15:28:57.450363       1 main.go:146] CSR csr-9z8hq added\nI0702 15:28:57.450374       1 main.go:149] CSR csr-9z8hq is already approved\nI0702 15:28:57.450388       1 main.go:146] CSR csr-jfzr8 added\nI0702 15:28:57.450399       1 main.go:149] CSR csr-jfzr8 is already approved\nI0702 15:28:57.450412       1 main.go:146] CSR csr-jzrdc added\nI0702 15:28:57.450423       1 main.go:149] CSR csr-jzrdc is already approved\nI0702 15:28:57.450435       1 main.go:146] CSR csr-l2jk4 added\nI0702 15:28:57.450445       1 main.go:149] CSR csr-l2jk4 is already approved\nW0702 15:42:13.992963       1 reflector.go:289] github.com/openshift/cluster-machine-approver/main.go:238: watch of *v1beta1.CertificateSigningRequest ended with: too old resource version: 24346 (35368)\n
Jul 02 15:44:58.662 E ns/openshift-machine-config-operator pod/machine-config-operator-687c5c47c9-x9pw7 node/ip-10-0-204-1.us-east-2.compute.internal container=machine-config-operator container exited with code 2 (Error): on refused\nE0702 15:42:13.100169       1 reflector.go:307] k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117: Failed to watch *v1beta1.CustomResourceDefinition: Get https://172.30.0.1:443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?allowWatchBookmarks=true&labelSelector=openshift.io%2Foperator-managed%3D&resourceVersion=29238&timeout=5m47s&timeoutSeconds=347&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:42:13.121853       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.ControllerConfig: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=34448&timeout=5m23s&timeoutSeconds=323&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:42:13.123830       1 reflector.go:307] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to watch *v1.MachineConfigPool: Get https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools?allowWatchBookmarks=true&resourceVersion=34829&timeout=5m17s&timeoutSeconds=317&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:42:13.156088       1 reflector.go:307] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Failed to watch *v1.Proxy: Get https://172.30.0.1:443/apis/config.openshift.io/v1/proxies?allowWatchBookmarks=true&resourceVersion=30240&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nE0702 15:42:13.201023       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts?allowWatchBookmarks=true&resourceVersion=29242&timeout=5m8s&timeoutSeconds=308&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\n
Jul 02 15:44:58.786 E ns/openshift-machine-api pod/machine-api-operator-69bd8c7d75-z6mnr node/ip-10-0-204-1.us-east-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 02 15:44:59.102 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-pkwh7 node/ip-10-0-204-1.us-east-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 02 15:44:59.102 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-pkwh7 node/ip-10-0-204-1.us-east-2.compute.internal container=machine-healthcheck-controller container exited with code 255 (Error): 
Jul 02 15:44:59.102 E ns/openshift-machine-api pod/machine-api-controllers-5748b9b969-pkwh7 node/ip-10-0-204-1.us-east-2.compute.internal container=nodelink-controller container exited with code 255 (Error): 
Jul 02 15:45:15.560 E ns/openshift-console pod/console-7785dd4dcf-n86hv node/ip-10-0-204-1.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:30:58Z cmd/main: cookies are secure!\n2020-07-02T15:30:58Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:30:58Z cmd/main: using TLS\n
Jul 02 15:45:48.080 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:45:46.652107       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:45:46.654842       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:45:46.657109       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:45:46.657132       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:45:46.657963       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
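This exit, repeated below as each master's controllers restart, looks like a startup race rather than a crash in the controller's logic: the replacement cluster-policy-controller apparently tries to bind its health-check port while the previous process still holds it, and it exits fatally until the old listener goes away. A standalone sketch of that failure mode (not the controller's actual code):

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    // If another process still holds 10357, net.Listen fails with
    // "listen tcp 0.0.0.0:10357: bind: address already in use" and the
    // process exits non-zero, matching the container exits logged above.
    ln, err := net.Listen("tcp", "0.0.0.0:10357")
    if err != nil {
        log.Fatalf("%v", err)
    }
    defer ln.Close()
    log.Println("Started health checks at 0.0.0.0:10357")
    log.Fatal(http.Serve(ln, http.NotFoundHandler()))
}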
Jul 02 15:45:48.980 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: reconciling Grafana Dashboard Sources ConfigMap failed: updating ConfigMap object failed: Put https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/configmaps/grafana-dashboards: unexpected EOF
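monitoring goes Degraded here because a single PUT to the API service died with "unexpected EOF" while the kube-apiserver behind 172.30.0.1 was rolling. One common way to ride out this kind of transient error is to retry with backoff; the sketch below shows only that general pattern, with updateDashboardsConfigMap as a purely hypothetical stand-in for the real ConfigMap update:

package main

import (
    "errors"
    "fmt"
    "io"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

var attempts int

// updateDashboardsConfigMap is a hypothetical stand-in for the real update;
// it fails twice with io.ErrUnexpectedEOF to imitate the dropped connection.
func updateDashboardsConfigMap() error {
    attempts++
    if attempts < 3 {
        return io.ErrUnexpectedEOF
    }
    return nil
}

func main() {
    backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2, Steps: 5}
    err := wait.ExponentialBackoff(backoff, func() (bool, error) {
        if err := updateDashboardsConfigMap(); err != nil {
            if errors.Is(err, io.ErrUnexpectedEOF) {
                return false, nil // transient: retry after the next backoff step
            }
            return false, err // permanent: give up immediately
        }
        return true, nil
    })
    fmt.Println("final result:", err, "after", attempts, "attempts")
}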
Jul 02 15:46:08.127 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-154-159.us-east-2.compute.internal node/ip-10-0-154-159.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:46:07.745920       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:46:07.747642       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:46:07.749363       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0702 15:46:07.749385       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 15:46:07.750362       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:47:14.637 E ns/openshift-marketplace pod/community-operators-575cc8d579-csc47 node/ip-10-0-237-60.us-east-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 02 15:47:17.646 E ns/openshift-marketplace pod/certified-operators-558474559f-9bw5g node/ip-10-0-237-60.us-east-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 02 15:47:22.443 E ns/openshift-image-registry pod/node-ca-6rz6c node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.458 E ns/openshift-cluster-node-tuning-operator pod/tuned-9rfvx node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.472 E ns/openshift-monitoring pod/node-exporter-pbkgn node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.497 E ns/openshift-sdn pod/ovs-4h5q2 node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.508 E ns/openshift-multus pod/multus-n75x2 node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.518 E ns/openshift-machine-config-operator pod/machine-config-daemon-tcg6m node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:22.532 E ns/openshift-dns pod/dns-default-t57c6 node/ip-10-0-189-221.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
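The "Running->Pending" invariant violations in this block, and in the similar blocks further down, come from the test monitor rather than from the pods themselves: a pod's status should never report Pending again once it has been Running, and seeing that transition on a whole node's worth of daemonset pods at the same instant is consistent with the node rebooting for the machine-config update that the clusterversion event below reports as still in progress. A small illustrative version of the phase rule (violatesPhaseInvariant is hypothetical, not the monitor's code):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// violatesPhaseInvariant reports whether a pod phase change breaks the
// "pod may not transition Running->Pending" rule flagged above.
func violatesPhaseInvariant(from, to corev1.PodPhase) bool {
    return from == corev1.PodRunning && to == corev1.PodPending
}

func main() {
    fmt.Println(violatesPhaseInvariant(corev1.PodRunning, corev1.PodPending)) // true
    fmt.Println(violatesPhaseInvariant(corev1.PodPending, corev1.PodRunning)) // false
}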
Jul 02 15:47:30.278 E ns/openshift-machine-config-operator pod/machine-config-daemon-tcg6m node/ip-10-0-189-221.us-east-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 02 15:47:43.059 E ns/openshift-monitoring pod/prometheus-adapter-8b8cf5789-jtpwk node/ip-10-0-140-158.us-east-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0702 15:41:55.122097       1 adapter.go:93] successfully using in-cluster auth\nI0702 15:41:55.831981       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 02 15:47:43.072 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-158.us-east-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/02 15:29:20 Watching directory: "/etc/alertmanager/config"\n
Jul 02 15:47:43.072 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-140-158.us-east-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/02 15:29:21 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:29:21 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/02 15:29:21 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/02 15:29:22 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/02 15:29:22 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/02 15:29:22 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/02 15:29:22 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0702 15:29:22.028135       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/02 15:29:22 http.go:107: HTTPS: listening on [::]:9095\n
Jul 02 15:47:48.352 E ns/openshift-monitoring pod/node-exporter-plpd5 node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.365 E ns/openshift-controller-manager pod/controller-manager-slfgz node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.378 E ns/openshift-cluster-node-tuning-operator pod/tuned-zpjt6 node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.395 E ns/openshift-image-registry pod/node-ca-tr29r node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.419 E ns/openshift-multus pod/multus-admission-controller-z96lj node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.432 E ns/openshift-sdn pod/ovs-rxbcj node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.444 E ns/openshift-sdn pod/sdn-controller-blm7v node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.462 E ns/openshift-sdn pod/sdn-xqmbg node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.491 E ns/openshift-multus pod/multus-9kk7d node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.501 E ns/openshift-dns pod/dns-default-ckq9l node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.513 E ns/openshift-machine-config-operator pod/machine-config-daemon-5pjgc node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:47:48.526 E ns/openshift-machine-config-operator pod/machine-config-server-hqrdb node/ip-10-0-204-1.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:48:00.425 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-189-221.us-east-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-02T15:47:58.603Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-02T15:47:58.607Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-02T15:47:58.607Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-02T15:47:58.608Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-02T15:47:58.609Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-02T15:47:58.609Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-02T15:47:58.609Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-02T15:47:58.609Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-02T15:47:58.609Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-02T15:47:58.609Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-02T15:47:58.609Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-02
Jul 02 15:48:12.433 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-204-1.us-east-2.compute.internal node/ip-10-0-204-1.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:48:09.862127       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:48:09.870756       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:48:09.873603       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:48:09.873696       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:48:09.881332       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:48:15.135 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
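Failing=True here is the cluster-version operator relaying that the machine-config operator has not levelled yet, per the message above. A rough sketch of how a condition lookup over the openshift/api types can express that kind of check (findCondition is illustrative, not the CVO's actual code):

package main

import (
    "fmt"

    configv1 "github.com/openshift/api/config/v1"
)

// findCondition returns the condition of the given type from an operator's
// status, or nil if it is not set.
func findCondition(conds []configv1.ClusterOperatorStatusCondition, t configv1.ClusterStatusConditionType) *configv1.ClusterOperatorStatusCondition {
    for i := range conds {
        if conds[i].Type == t {
            return &conds[i]
        }
    }
    return nil
}

func main() {
    conds := []configv1.ClusterOperatorStatusCondition{
        {Type: configv1.OperatorProgressing, Status: configv1.ConditionTrue},
        {Type: configv1.OperatorAvailable, Status: configv1.ConditionTrue},
    }
    if c := findCondition(conds, configv1.OperatorProgressing); c != nil && c.Status == configv1.ConditionTrue {
        fmt.Println("cluster operator is still updating")
    }
}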
Jul 02 15:48:36.231 E ns/openshift-console pod/console-7785dd4dcf-pqqrk node/ip-10-0-160-31.us-east-2.compute.internal container=console container exited with code 2 (Error): 2020-07-02T15:30:54Z cmd/main: cookies are secure!\n2020-07-02T15:30:54Z cmd/main: Binding to [::]:8443...\n2020-07-02T15:30:54Z cmd/main: using TLS\n2020-07-02T15:32:03Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-4ztwk55l-b526d.origin-ci-int-aws.dev.rhcloud.com: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
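The console error above is a plain client-side timeout: its HTTP client gave up waiting for the OAuth route while authentication was being redeployed. Any Go http.Client with a Timeout yields an error of this shape; a tiny sketch using a deliberately unreachable placeholder address (10.255.255.1 stands in for any endpoint that never answers):

package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 2 * time.Second}
    // Placeholder, non-routable address used only to make the connection
    // hang until the client timeout fires.
    _, err := client.Head("https://10.255.255.1/oauth/token")
    fmt.Println(err) // "... (Client.Timeout exceeded while awaiting headers)"
}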
Jul 02 15:50:17.712 E ns/openshift-monitoring pod/node-exporter-sbjj5 node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.742 E ns/openshift-image-registry pod/node-ca-svg6v node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.771 E ns/openshift-cluster-node-tuning-operator pod/tuned-s8j26 node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.786 E ns/openshift-multus pod/multus-hgk8d node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.806 E ns/openshift-sdn pod/ovs-brglx node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.823 E ns/openshift-sdn pod/sdn-9qbm9 node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.837 E ns/openshift-dns pod/dns-default-g8682 node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:50:17.865 E ns/openshift-machine-config-operator pod/machine-config-daemon-f5ccv node/ip-10-0-140-158.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.809 E ns/openshift-monitoring pod/node-exporter-b2q4s node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.825 E ns/openshift-cluster-node-tuning-operator pod/tuned-z4d9r node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.837 E ns/openshift-controller-manager pod/controller-manager-5fxpx node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.853 E ns/openshift-image-registry pod/node-ca-4m9mz node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.865 E ns/openshift-sdn pod/sdn-controller-dffvt node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.899 E ns/openshift-sdn pod/ovs-6x5nb node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.915 E ns/openshift-multus pod/multus-admission-controller-hkfdq node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.951 E ns/openshift-multus pod/multus-hft48 node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.971 E ns/openshift-dns pod/dns-default-vq5vn node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:05.993 E ns/openshift-machine-config-operator pod/machine-config-daemon-275gl node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:06.008 E ns/openshift-machine-config-operator pod/machine-config-server-4mtm9 node/ip-10-0-160-31.us-east-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 02 15:51:36.070 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:51:34.798042       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:51:34.798423       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:51:34.801777       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0702 15:51:34.801870       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0702 15:51:34.804287       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 02 15:51:51.132 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-160-31.us-east-2.compute.internal node/ip-10-0-160-31.us-east-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0702 15:51:50.501051       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0702 15:51:50.502684       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0702 15:51:50.505400       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0702 15:51:50.506179       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n