Result: SUCCESS
Tests: 4 failed / 23 succeeded
Started: 2020-07-09 09:42
Elapsed: 1h47m
Work namespace: ci-op-mrbsy7t8
Refs: release-4.4:7db5488e, 333:6a70e322
pod: 73b4e2bd-c1c8-11ea-8043-0a580a800442
repo: openshift/cluster-api-provider-aws
revision: 1

Test Failures


Cluster upgrade Application behind service load balancer with PDB is not disrupted (42m14s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sApplication\sbehind\sservice\sload\sbalancer\swith\sPDB\sis\snot\sdisrupted$'
Service was unreachable during disruption for at least 1s of 38m30s (0%):

Jul 09 10:59:39.407 E ns/e2e-k8s-service-lb-available-3672 svc/service-test Service stopped responding to GET requests on reused connections
Jul 09 10:59:39.586 I ns/e2e-k8s-service-lb-available-3672 svc/service-test Service started responding to GET requests on reused connections
				from junit_upgrade_1594293563.xml
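
For context on what this check measures: the availability monitor polls the load-balancer service continuously, accumulates the time it spends failing, and reports that total against the length of the upgrade window. The Go sketch below is only an illustration of that idea, not the openshift/origin monitor code; the target URL, poll interval, and 30-second demo window are placeholder assumptions.

// disruption_probe.go -- minimal sketch of a service-disruption probe.
// Illustrative only: endpoint, interval, and window are assumptions.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	target := "http://service-test.example.com/" // placeholder endpoint
	interval := time.Second
	client := &http.Client{Timeout: 3 * time.Second}

	var down time.Duration  // accumulated downtime
	var downSince time.Time // zero while the service is healthy
	start := time.Now()

	for time.Since(start) < 30*time.Second { // short demo window
		resp, err := client.Get(target)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		switch {
		case !healthy && downSince.IsZero():
			downSince = time.Now() // disruption started
		case healthy && !downSince.IsZero():
			down += time.Since(downSince) // disruption ended
			downSince = time.Time{}
		}
		time.Sleep(interval)
	}
	if !downSince.IsZero() {
		down += time.Since(downSince) // still down at the end of the window
	}
	total := time.Since(start)
	fmt.Printf("Service was unreachable for %s of %s (%.0f%%)\n",
		down.Round(time.Second), total.Round(time.Second),
		100*down.Seconds()/total.Seconds())
}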



Cluster upgrade Kubernetes APIs remain available (41m44s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 41m43s (0%):

Jul 09 10:43:34.753 E kube-apiserver Kube API started failing: Get https://api.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: dial tcp 44.224.175.61:6443: connect: connection refused
Jul 09 10:43:35.588 E kube-apiserver Kube API is not responding to GET requests
Jul 09 10:43:35.760 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1594293563.xml
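
The "(0%)" figures in these API-availability summaries are simply the accumulated downtime divided by the measured window, which rounds to zero here: 2s out of 41m43s is roughly 0.08% (and 1s out of 41m43s for the OpenShift API check below is about 0.04%). A quick check of that arithmetic; the rounding to a whole percent is an assumption on my part:

package main

import "fmt"

func main() {
	// 2 seconds of downtime over the 41m43s (2503 s) window reported
	// for the "Kubernetes APIs remain available" check.
	down := 2.0
	window := 41*60 + 43.0
	fmt.Printf("%.2f%% unavailable\n", 100*down/window) // prints: 0.08% unavailable
}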



Cluster upgrade OpenShift APIs remain available (41m44s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 41m43s (0%):

Jul 09 10:43:34.655 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: dial tcp 52.27.40.6:6443: connect: connection refused
Jul 09 10:43:35.470 E openshift-apiserver OpenShift API is not responding to GET requests
Jul 09 10:43:35.761 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1594293563.xml



openshift-tests Monitor cluster while tests execute (47m13s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
158 error level events were detected during this test run:

Jul 09 10:35:37.187 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-cluster-version/cluster-version-operator is progressing NewReplicaSetAvailable: ReplicaSet "cluster-version-operator-5b66f98c87" has successfully progressed.
Jul 09 10:36:11.602 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-9d4cd4d5b-9sfr6 node/ip-10-0-135-176.us-west-2.compute.internal container=kube-apiserver-operator container exited with code 255 (Error): 0:36:10.587633       1 controller.go:331] Shutting down BoundSATokenSignerController\nI0709 10:36:10.587644       1 feature_upgradeable_controller.go:106] Shutting down FeatureUpgradeableController\nI0709 10:36:10.587649       1 status_controller.go:212] Shutting down StatusSyncer-kube-apiserver\nI0709 10:36:10.587654       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0709 10:36:10.587725       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0709 10:36:10.587739       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0709 10:36:10.587850       1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0709 10:36:10.587914       1 base_controller.go:49] Shutting down worker of  controller ...\nI0709 10:36:10.587934       1 base_controller.go:49] Shutting down worker of PruneController controller ...\nI0709 10:36:10.587945       1 base_controller.go:49] Shutting down worker of UnsupportedConfigOverridesController controller ...\nI0709 10:36:10.588589       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0709 10:36:10.588634       1 base_controller.go:49] Shutting down worker of LoggingSyncer controller ...\nI0709 10:36:10.588648       1 base_controller.go:49] Shutting down worker of  controller ...\nI0709 10:36:10.588930       1 base_controller.go:49] Shutting down worker of InstallerStateController controller ...\nI0709 10:36:10.588943       1 base_controller.go:49] Shutting down worker of NodeController controller ...\nI0709 10:36:10.588996       1 base_controller.go:49] Shutting down worker of StaticPodStateController controller ...\nI0709 10:36:10.589013       1 base_controller.go:49] Shutting down worker of InstallerController controller ...\nI0709 10:36:10.590221       1 base_controller.go:39] All RevisionController workers have been terminated\n
Jul 09 10:36:39.682 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-6597d6f85b-j5ws4 node/ip-10-0-135-176.us-west-2.compute.internal container=kube-controller-manager-operator container exited with code 255 (Error): WatchBookmarks=true&resourceVersion=19129&timeout=6m11s&timeoutSeconds=371&watch=true: dial tcp [::1]:6443: connect: connection refused\\nI0709 10:27:02.796241       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cert-recovery-controller-lock: timed out waiting for the condition\\nF0709 10:27:02.796293       1 leaderelection.go:67] leaderelection lost\\n\"" to "NodeControllerDegraded: All master nodes are ready"\nI0709 10:36:38.713638       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 10:36:38.713802       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0709 10:36:38.713959       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0709 10:36:38.714035       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0709 10:36:38.714075       1 base_controller.go:74] Shutting down InstallerController ...\nI0709 10:36:38.714110       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0709 10:36:38.714141       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0709 10:36:38.714170       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0709 10:36:38.714201       1 base_controller.go:74] Shutting down NodeController ...\nI0709 10:36:38.714229       1 base_controller.go:74] Shutting down RevisionController ...\nI0709 10:36:38.714258       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0709 10:36:38.714287       1 base_controller.go:74] Shutting down  ...\nI0709 10:36:38.714315       1 base_controller.go:74] Shutting down PruneController ...\nI0709 10:36:38.714345       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0709 10:36:38.714373       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0709 10:36:38.714402       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nF0709 10:36:38.714420       1 builder.go:209] server exited\n
Jul 09 10:36:45.703 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-7499496768-w7644 node/ip-10-0-135-176.us-west-2.compute.internal container=kube-scheduler-operator-container container exited with code 255 (Error): ft-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"1e1770dd-5697-43ed-9586-7e13c94adc98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-203-40.us-west-2.compute.internal pods/openshift-kube-scheduler-ip-10-0-203-40.us-west-2.compute.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0709 10:36:44.660705       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 10:36:44.661514       1 base_controller.go:74] Shutting down InstallerController ...\nI0709 10:36:44.661545       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0709 10:36:44.661556       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0709 10:36:44.661566       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0709 10:36:44.661580       1 base_controller.go:74] Shutting down PruneController ...\nI0709 10:36:44.661590       1 base_controller.go:74] Shutting down NodeController ...\nI0709 10:36:44.661600       1 base_controller.go:74] Shutting down  ...\nI0709 10:36:44.661610       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0709 10:36:44.661619       1 base_controller.go:74] Shutting down RevisionController ...\nI0709 10:36:44.661629       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0709 10:36:44.661638       1 target_config_reconciler.go:126] Shutting down TargetConfigReconciler\nI0709 10:36:44.661648       1 status_controller.go:212] Shutting down StatusSyncer-kube-scheduler\nI0709 10:36:44.661658       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0709 10:36:44.661669       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nF0709 10:36:44.661841       1 builder.go:243] stopped\n
Jul 09 10:36:47.406 E clusteroperator/monitoring changed Degraded to True: UpdatingGrafanaFailed: Failed to rollout the stack. Error: running task Updating Grafana failed: deleting old Grafana configmaps failed: error listing configmaps with label selector monitoring.openshift.io/name=grafana,monitoring.openshift.io/hash!=39man1pbaa8jq: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
Jul 09 10:37:09.719 E ns/openshift-machine-api pod/machine-api-operator-68559fb64c-lzm79 node/ip-10-0-203-40.us-west-2.compute.internal container=machine-api-operator container exited with code 2 (Error): 
Jul 09 10:37:31.171 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-176.us-west-2.compute.internal node/ip-10-0-135-176.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0709 10:37:30.475114       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0709 10:37:30.476598       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0709 10:37:30.476666       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nI0709 10:37:30.476634       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0709 10:37:30.477793       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 09 10:37:46.324 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-176.us-west-2.compute.internal node/ip-10-0-135-176.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0709 10:37:45.816562       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0709 10:37:45.818202       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0709 10:37:45.820658       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0709 10:37:45.820689       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0709 10:37:45.821757       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 09 10:38:50.440 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-135-176.us-west-2.compute.internal node/ip-10-0-135-176.us-west-2.compute.internal container=kube-scheduler container exited with code 255 (Error):       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25119&timeout=8m4s&timeoutSeconds=484&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:38:48.906311       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=25119&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:38:49.329215       1 webhook.go:109] Failed to make webhook authenticator request: Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused\nE0709 10:38:49.329244       1 authentication.go:104] Unable to authenticate the request due to an error: [invalid bearer token, Post https://localhost:6443/apis/authentication.k8s.io/v1/tokenreviews: dial tcp [::1]:6443: connect: connection refused]\nE0709 10:38:49.329294       1 writers.go:105] apiserver was unable to write a JSON response: no kind is registered for the type v1.Status in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"\nE0709 10:38:49.329309       1 status.go:71] apiserver received an error that is not an metav1.Status: &runtime.notRegisteredErr{schemeName:"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30", gvk:schema.GroupVersionKind{Group:"", Version:"", Kind:""}, target:runtime.GroupVersioner(nil), t:(*reflect.rtype)(0x1a38a40)}\nI0709 10:38:49.751798       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0709 10:38:49.751826       1 server.go:257] leaderelection lost\n
Jul 09 10:38:55.186 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-203-40.us-west-2.compute.internal node/ip-10-0-203-40.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0709 10:38:53.844505       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0709 10:38:53.846543       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0709 10:38:53.846606       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0709 10:38:53.846667       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0709 10:38:53.847812       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 09 10:39:14.505 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-176.us-west-2.compute.internal node/ip-10-0-135-176.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 09 10:39:18.244 E ns/openshift-machine-api pod/machine-api-controllers-578bc87857-c6s8x node/ip-10-0-172-138.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 09 10:39:22.687 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-135-176.us-west-2.compute.internal node/ip-10-0-135-176.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): esourceQuota: Get https://localhost:6443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=25100&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:39:21.942339       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:6443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=25100&timeout=9m54s&timeoutSeconds=594&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:39:21.943360       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:6443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=22936&timeout=7m23s&timeoutSeconds=443&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:39:21.944393       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=24532&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:39:21.948394       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:6443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=25100&timeout=5m25s&timeoutSeconds=325&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:39:21.960865       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:6443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=25938&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0709 10:39:21.962337       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0709 10:39:21.962373       1 policy_controller.go:94] leaderelection lost\n
Jul 09 10:39:55.476 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-172-138.us-west-2.compute.internal node/ip-10-0-172-138.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0709 10:39:54.856127       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0709 10:39:54.856482       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0709 10:39:54.858062       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0709 10:39:54.858078       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0709 10:39:54.858647       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 09 10:40:11.561 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-172-138.us-west-2.compute.internal node/ip-10-0-172-138.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): I0709 10:40:10.816293       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0709 10:40:10.817577       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0709 10:40:10.818931       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0709 10:40:10.818977       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0709 10:40:10.819313       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jul 09 10:40:57.583 E ns/openshift-insights pod/insights-operator-8f6cb95c4-lfs86 node/ip-10-0-203-40.us-west-2.compute.internal container=operator container exited with code 2 (Error): -install-manifests/version with fingerprint=\nI0709 10:39:19.290191       1 diskrecorder.go:63] Recording config/version with fingerprint=\nI0709 10:39:19.290317       1 diskrecorder.go:63] Recording config/id with fingerprint=\nI0709 10:39:19.293459       1 diskrecorder.go:63] Recording config/infrastructure with fingerprint=\nI0709 10:39:19.296250       1 diskrecorder.go:63] Recording config/network with fingerprint=\nI0709 10:39:19.299134       1 diskrecorder.go:63] Recording config/authentication with fingerprint=\nI0709 10:39:19.301584       1 diskrecorder.go:63] Recording config/imageregistry with fingerprint=\nI0709 10:39:19.304174       1 diskrecorder.go:63] Recording config/featuregate with fingerprint=\nI0709 10:39:19.306457       1 diskrecorder.go:63] Recording config/oauth with fingerprint=\nI0709 10:39:19.309268       1 diskrecorder.go:63] Recording config/ingress with fingerprint=\nI0709 10:39:19.312549       1 diskrecorder.go:63] Recording config/proxy with fingerprint=\nI0709 10:39:19.319141       1 diskrecorder.go:170] Writing 51 records to /var/lib/insights-operator/insights-2020-07-09-103919.tar.gz\nI0709 10:39:19.322281       1 diskrecorder.go:134] Wrote 51 records to disk in 3ms\nI0709 10:39:19.322301       1 periodic.go:151] Periodic gather config completed in 87ms\nI0709 10:39:36.168314       1 httplog.go:90] GET /metrics: (4.946752ms) 200 [Prometheus/2.15.2 10.129.2.14:51130]\nI0709 10:39:45.160391       1 httplog.go:90] GET /metrics: (1.552195ms) 200 [Prometheus/2.15.2 10.131.0.18:45198]\nI0709 10:40:06.169972       1 httplog.go:90] GET /metrics: (6.633003ms) 200 [Prometheus/2.15.2 10.129.2.14:51130]\nI0709 10:40:15.160498       1 httplog.go:90] GET /metrics: (1.633128ms) 200 [Prometheus/2.15.2 10.131.0.18:45198]\nI0709 10:40:36.168621       1 httplog.go:90] GET /metrics: (5.307727ms) 200 [Prometheus/2.15.2 10.129.2.14:51130]\nI0709 10:40:45.160318       1 httplog.go:90] GET /metrics: (1.449111ms) 200 [Prometheus/2.15.2 10.131.0.18:45198]\nI0709 10:40:52.322796       1 status.go:298] The operator is healthy\n
Jul 09 10:41:10.033 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-64db5ftns node/ip-10-0-135-176.us-west-2.compute.internal container=operator container exited with code 255 (Error): 1.549357ms) 200 [Prometheus/2.15.2 10.129.2.14:39498]\nI0709 10:39:13.067340       1 httplog.go:90] GET /metrics: (5.203726ms) 200 [Prometheus/2.15.2 10.131.0.18:40524]\nI0709 10:39:19.283571       1 httplog.go:90] GET /metrics: (1.618868ms) 200 [Prometheus/2.15.2 10.129.2.14:39498]\nI0709 10:39:22.367582       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:22.368399       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:22.368526       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:22.369150       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:22.370069       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:22.370160       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:24.313843       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0709 10:39:43.074973       1 httplog.go:90] GET /metrics: (6.389564ms) 200 [Prometheus/2.15.2 10.131.0.18:40524]\nI0709 10:39:49.284115       1 httplog.go:90] GET /metrics: (2.03816ms) 200 [Prometheus/2.15.2 10.129.2.14:39498]\nI0709 10:40:13.067610       1 httplog.go:90] GET /metrics: (5.521698ms) 200 [Prometheus/2.15.2 10.131.0.18:40524]\nI0709 10:40:19.283661       1 httplog.go:90] GET /metrics: (1.602358ms) 200 [Prometheus/2.15.2 10.129.2.14:39498]\nI0709 10:40:43.066881       1 httplog.go:90] GET /metrics: (4.798667ms) 200 [Prometheus/2.15.2 10.131.0.18:40524]\nI0709 10:40:49.283494       1 httplog.go:90] GET /metrics: (1.498986ms) 200 [Prometheus/2.15.2 10.129.2.14:39498]\nI0709 10:41:09.049889       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 10:41:09.050062       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0709 10:41:09.050106       1 builder.go:243] stopped\n
Jul 09 10:41:10.644 E ns/openshift-monitoring pod/node-exporter-m5l46 node/ip-10-0-203-40.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -09T10:15:53Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:53Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 09 10:41:19.312 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-7.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/09 10:21:55 Watching directory: "/etc/alertmanager/config"\n
Jul 09 10:41:19.312 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-151-7.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/09 10:21:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/09 10:21:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/09 10:21:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/09 10:21:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/09 10:21:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/09 10:21:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/09 10:21:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0709 10:21:55.828200       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/09 10:21:55 http.go:107: HTTPS: listening on [::]:9095\n
Jul 09 10:41:21.328 E ns/openshift-monitoring pod/kube-state-metrics-fbff5b64-zw7rm node/ip-10-0-151-7.us-west-2.compute.internal container=kube-state-metrics container exited with code 2 (Error): 
Jul 09 10:41:22.048 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5d6d84d5b6-qnsth node/ip-10-0-252-241.us-west-2.compute.internal container=snapshot-controller container exited with code 2 (Error): 
Jul 09 10:41:23.335 E ns/openshift-monitoring pod/openshift-state-metrics-77c568f86b-9c2pp node/ip-10-0-151-7.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 09 10:41:36.256 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-6d588b7676" is progressing.
Jul 09 10:41:46.832 E ns/openshift-controller-manager pod/controller-manager-cwvmk node/ip-10-0-203-40.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0709 10:16:25.502337       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-109-g75548a0)\nI0709 10:16:25.503772       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mrbsy7t8/stable-initial@sha256:78433632795a6e7b402ca35507b03cea9d8dfb1b28321e3284acc04d3760db79"\nI0709 10:16:25.503789       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mrbsy7t8/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0709 10:16:25.503879       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\nI0709 10:16:25.503900       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\n
Jul 09 10:41:47.056 E ns/openshift-controller-manager pod/controller-manager-5tzxl node/ip-10-0-172-138.us-west-2.compute.internal container=controller-manager container exited with code 137 (Error): I0709 10:16:31.339448       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (v0.0.0-alpha.0-109-g75548a0)\nI0709 10:16:31.341269       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-mrbsy7t8/stable-initial@sha256:78433632795a6e7b402ca35507b03cea9d8dfb1b28321e3284acc04d3760db79"\nI0709 10:16:31.341283       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-mrbsy7t8/stable-initial@sha256:2e9fa701fb05ce0c7a3a0ce59d48165fbc50bedfbe3033f5eec1051fbda305b0"\nI0709 10:16:31.341345       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0709 10:16:31.342036       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
Jul 09 10:41:53.454 E ns/openshift-monitoring pod/telemeter-client-549f987f4f-stprk node/ip-10-0-134-254.us-west-2.compute.internal container=reload container exited with code 2 (Error): 
Jul 09 10:41:53.454 E ns/openshift-monitoring pod/telemeter-client-549f987f4f-stprk node/ip-10-0-134-254.us-west-2.compute.internal container=telemeter-client container exited with code 2 (Error): 
Jul 09 10:42:00.153 E ns/openshift-monitoring pod/prometheus-adapter-79fff5bb6f-mxfqw node/ip-10-0-252-241.us-west-2.compute.internal container=prometheus-adapter container exited with code 2 (Error): I0709 10:21:13.980716       1 adapter.go:93] successfully using in-cluster auth\nI0709 10:21:14.807815       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jul 09 10:42:13.913 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-134-254.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-09T10:42:07.620Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-09T10:42:07.623Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-09T10:42:07.624Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-09T10:42:07.625Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-09T10:42:07.625Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-09T10:42:07.625Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-09T10:42:07.626Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-09T10:42:07.626Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-09
Jul 09 10:42:35.369 E ns/openshift-monitoring pod/thanos-querier-545bbbfb-cbs8w node/ip-10-0-252-241.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): sabled\n2020/07/09 10:23:19 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/09 10:23:19 http.go:107: HTTPS: listening on [::]:9091\nI0709 10:23:19.417055       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/09 10:23:57 oauthproxy.go:774: basicauth: 10.128.0.6:56444 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:25:57 oauthproxy.go:774: basicauth: 10.128.0.6:57724 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:27:57 oauthproxy.go:774: basicauth: 10.128.0.6:60834 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:29:57 oauthproxy.go:774: basicauth: 10.128.0.6:37734 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:32:56 oauthproxy.go:774: basicauth: 10.128.0.6:40036 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:32:57 oauthproxy.go:774: basicauth: 10.128.0.6:40048 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:34:57 oauthproxy.go:774: basicauth: 10.128.0.6:41366 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:36:11 oauthproxy.go:774: basicauth: 10.130.0.49:51408 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:37:10 oauthproxy.go:774: basicauth: 10.130.0.49:55970 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:39:10 oauthproxy.go:774: basicauth: 10.130.0.49:35862 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:41:10 oauthproxy.go:774: basicauth: 10.130.0.49:37452 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:42:10 oauthproxy.go:774: basicauth: 10.130.0.49:38938 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 09 10:42:36.591 E ns/openshift-marketplace pod/redhat-marketplace-5855bdb54d-5n7fr node/ip-10-0-151-7.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Jul 09 10:42:39.066 E ns/openshift-console-operator pod/console-operator-5d99855895-gqbdz node/ip-10-0-203-40.us-west-2.compute.internal container=console-operator container exited with code 255 (Error): W0709 10:37:17.674902       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 735; INTERNAL_ERROR") has prevented the request from succeeding\nW0709 10:38:06.748762       1 reflector.go:326] github.com/openshift/client-go/oauth/informers/externalversions/factory.go:101: watch of *v1.OAuthClient ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 621; INTERNAL_ERROR") has prevented the request from succeeding\nW0709 10:38:06.756051       1 reflector.go:326] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 943; INTERNAL_ERROR") has prevented the request from succeeding\nI0709 10:42:37.965336       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 10:42:37.968611       1 controller.go:138] shutting down ConsoleServiceSyncController\nI0709 10:42:37.968664       1 controller.go:109] shutting down ConsoleResourceSyncDestinationController\nI0709 10:42:37.968695       1 controller.go:70] Shutting down Console\nI0709 10:42:37.968717       1 configmap_cafile_content.go:226] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0709 10:42:37.968738       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0709 10:42:37.968760       1 status_controller.go:212] Shutting down StatusSyncer-console\nI0709 10:42:37.968779       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0709 10:42:37.968791       1 management_state_controller.go:112] Shutting down management-state-controller-console\nI0709 10:42:37.968809       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nF0709 10:42:37.968623       1 builder.go:209] server exited\n
Jul 09 10:42:41.395 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-252-241.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-09T10:42:36.932Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-09T10:42:36.938Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-09T10:42:36.938Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-09T10:42:36.939Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-09T10:42:36.939Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-09T10:42:36.939Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-09T10:42:36.940Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-09T10:42:36.940Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-09
Jul 09 10:42:41.662 E ns/openshift-marketplace pod/redhat-operators-6d6ffd7b66-5ltbm node/ip-10-0-151-7.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 09 10:42:42.414 E ns/openshift-monitoring pod/node-exporter-n57f8 node/ip-10-0-135-176.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -09T10:15:54Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-09T10:15:54Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 09 10:43:00.451 E ns/openshift-monitoring pod/node-exporter-f2qsh node/ip-10-0-252-241.us-west-2.compute.internal container=node-exporter container exited with code 143 (Error): -09T10:19:41Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-07-09T10:19:41Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
Jul 09 10:43:02.653 E ns/openshift-marketplace pod/certified-operators-77b88db757-n7tqc node/ip-10-0-151-7.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 09 10:44:10.582 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-172-138.us-west-2.compute.internal node/ip-10-0-172-138.us-west-2.compute.internal container=setup init container exited with code 124 (Error): ................................................................................
Jul 09 10:44:17.676 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-172-138.us-west-2.compute.internal node/ip-10-0-172-138.us-west-2.compute.internal container=cluster-policy-controller container exited with code 255 (Error): 97&timeout=9m40s&timeoutSeconds=580&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:44:16.950997       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:6443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=31017&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:44:16.954519       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:6443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=22792&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:44:16.960209       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=25099&timeout=5m45s&timeoutSeconds=345&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:44:16.961428       1 reflector.go:307] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://localhost:6443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=24532&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0709 10:44:16.962764       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:6443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=25097&timeout=9m49s&timeoutSeconds=589&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0709 10:44:16.963551       1 leaderelection.go:288] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0709 10:44:16.963588       1 policy_controller.go:94] leaderelection lost\n
Jul 09 10:47:13.920 E ns/openshift-sdn pod/sdn-controller-9xgkj node/ip-10-0-203-40.us-west-2.compute.internal container=sdn-controller container exited with code 2 (Error): I0709 10:09:38.942828       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0709 10:14:23.041010       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
Jul 09 10:47:33.234 E ns/openshift-multus pod/multus-admission-controller-sw674 node/ip-10-0-172-138.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 09 10:48:16.176 E ns/openshift-multus pod/multus-admission-controller-kg9tc node/ip-10-0-203-40.us-west-2.compute.internal container=multus-admission-controller container exited with code 137 (Error): 
Jul 09 10:48:18.115 E ns/openshift-sdn pod/sdn-2hkss node/ip-10-0-203-40.us-west-2.compute.internal container=sdn container exited with code 255 (Error): 0.0.76:8443]\nI0709 10:47:45.256830  104358 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.16:6443 10.130.0.76:6443]\nI0709 10:47:45.256957  104358 roundrobin.go:217] Delete endpoint 10.129.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0709 10:47:45.257025  104358 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.16:8443 10.130.0.76:8443]\nI0709 10:47:45.257075  104358 roundrobin.go:217] Delete endpoint 10.129.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0709 10:47:45.401422  104358 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:47:45.401444  104358 proxier.go:347] userspace syncProxyRules took 36.72106ms\nI0709 10:47:45.532382  104358 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:47:45.532403  104358 proxier.go:347] userspace syncProxyRules took 25.164601ms\nI0709 10:48:15.501156  104358 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0709 10:48:15.501187  104358 pod.go:232] Error updating OVS multicast flows for VNID 14001481: exit status 1\nI0709 10:48:15.504738  104358 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)\nI0709 10:48:15.507236  104358 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-kg9tc\nI0709 10:48:15.662788  104358 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:15.662814  104358 proxier.go:347] userspace syncProxyRules took 33.572383ms\nI0709 10:48:17.131380  104358 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0709 10:48:17.131411  104358 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 09 10:48:29.252 E ns/openshift-multus pod/multus-56cwq node/ip-10-0-203-40.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 09 10:48:36.222 E ns/openshift-sdn pod/sdn-tlc8v node/ip-10-0-252-241.us-west-2.compute.internal container=sdn container exited with code 255 (Error): ing healthcheck "openshift-ingress/router-default" on port 31149\nI0709 10:48:26.354434   82008 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0709 10:48:26.354487   82008 cmd.go:173] openshift-sdn network plugin registering startup\nI0709 10:48:26.354596   82008 cmd.go:177] openshift-sdn network plugin ready\nI0709 10:48:27.129370   82008 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.16:6443 10.129.0.73:6443 10.130.0.76:6443]\nI0709 10:48:27.129406   82008 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.16:8443 10.129.0.73:8443 10.130.0.76:8443]\nI0709 10:48:27.141115   82008 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.76:6443]\nI0709 10:48:27.141153   82008 roundrobin.go:217] Delete endpoint 10.128.0.16:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0709 10:48:27.141172   82008 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.76:8443]\nI0709 10:48:27.141185   82008 roundrobin.go:217] Delete endpoint 10.128.0.16:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0709 10:48:27.266743   82008 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:27.266768   82008 proxier.go:347] userspace syncProxyRules took 28.523863ms\nI0709 10:48:27.399548   82008 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:27.399605   82008 proxier.go:347] userspace syncProxyRules took 31.388751ms\nI0709 10:48:35.108069   82008 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0709 10:48:35.108119   82008 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 09 10:49:00.512 E ns/openshift-sdn pod/sdn-7kbdf node/ip-10-0-172-138.us-west-2.compute.internal container=sdn container exited with code 255 (Error): .942396  107442 cmd.go:173] openshift-sdn network plugin registering startup\nI0709 10:48:04.942495  107442 cmd.go:177] openshift-sdn network plugin ready\nI0709 10:48:27.128702  107442 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.16:6443 10.129.0.73:6443 10.130.0.76:6443]\nI0709 10:48:27.128739  107442 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.16:8443 10.129.0.73:8443 10.130.0.76:8443]\nI0709 10:48:27.140299  107442 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.73:6443 10.130.0.76:6443]\nI0709 10:48:27.140328  107442 roundrobin.go:217] Delete endpoint 10.128.0.16:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0709 10:48:27.140342  107442 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.73:8443 10.130.0.76:8443]\nI0709 10:48:27.140349  107442 roundrobin.go:217] Delete endpoint 10.128.0.16:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0709 10:48:27.312100  107442 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:27.312130  107442 proxier.go:347] userspace syncProxyRules took 36.22055ms\nI0709 10:48:27.457369  107442 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:27.457489  107442 proxier.go:347] userspace syncProxyRules took 45.740025ms\nI0709 10:48:57.584565  107442 proxier.go:368] userspace proxy: processing 0 service events\nI0709 10:48:57.584584  107442 proxier.go:347] userspace syncProxyRules took 24.662299ms\nI0709 10:48:59.492891  107442 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0709 10:48:59.492920  107442 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jul 09 10:49:21.378 E ns/openshift-multus pod/multus-cwgdp node/ip-10-0-151-7.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 09 10:50:12.528 E ns/openshift-multus pod/multus-zqtdr node/ip-10-0-252-241.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 09 10:52:09.578 E ns/openshift-multus pod/multus-d68mx node/ip-10-0-134-254.us-west-2.compute.internal container=kube-multus container exited with code 137 (Error): 
Jul 09 10:53:15.366 E ns/openshift-machine-config-operator pod/machine-config-operator-84b85bcddd-m4f6x node/ip-10-0-135-176.us-west-2.compute.internal container=machine-config-operator container exited with code 2 (Error): nged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-07-09-095402}]\nE0709 10:10:41.263733       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0709 10:10:41.266589       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0709 10:10:42.270680       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0709 10:10:46.285481       1 sync.go:61] [init mode] synced RenderConfig in 5.461190204s\nI0709 10:10:46.461690       1 sync.go:61] [init mode] synced MachineConfigPools in 175.827787ms\nI0709 10:11:40.407402       1 sync.go:61] [init mode] synced MachineConfigDaemon in 53.945679034s\nI0709 10:11:44.451308       1 sync.go:61] [init mode] synced MachineConfigController in 4.043871046s\nI0709 10:11:53.496087       1 sync.go:61] [init mode] synced MachineConfigServer in 9.044723488s\nI0709 10:12:10.501644       1 sync.go:61] [init mode] synced RequiredPools in 17.005520109s\nI0709 10:12:10.712982       1 sync.go:89] Initialization complete\nE0709 10:12:47.393880       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: etcdserver: request timed out\nE0709 10:14:23.039164       1 leaderelection.go:331] error retrieving resource lock openshift-machine-config-operator/machine-config: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config: unexpected EOF\n
Jul 09 10:55:10.655 E ns/openshift-machine-config-operator pod/machine-config-daemon-l5l6m node/ip-10-0-172-138.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 09 10:55:38.165 E ns/openshift-machine-config-operator pod/machine-config-daemon-lt72n node/ip-10-0-252-241.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 09 10:55:46.963 E ns/openshift-machine-config-operator pod/machine-config-daemon-vz6d2 node/ip-10-0-151-7.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 09 10:56:14.856 E ns/openshift-machine-config-operator pod/machine-config-daemon-qfzqm node/ip-10-0-135-176.us-west-2.compute.internal container=oauth-proxy container exited with code 143 (Error): 
Jul 09 10:58:38.287 E ns/openshift-machine-config-operator pod/machine-config-server-4kzqv node/ip-10-0-172-138.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0709 10:11:45.278214       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-14-g1cb23e03-dirty (1cb23e039073a7452086dfefa2d492f621f7989a)\nI0709 10:11:45.279460       1 api.go:56] Launching server on :22624\nI0709 10:11:45.279531       1 api.go:56] Launching server on :22623\nI0709 10:16:41.364870       1 api.go:102] Pool worker requested by 10.0.255.11:9066\n
Jul 09 10:58:41.261 E ns/openshift-machine-config-operator pod/machine-config-server-8fghz node/ip-10-0-135-176.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0709 10:11:45.426885       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-14-g1cb23e03-dirty (1cb23e039073a7452086dfefa2d492f621f7989a)\nI0709 10:11:45.427588       1 api.go:56] Launching server on :22624\nI0709 10:11:45.427820       1 api.go:56] Launching server on :22623\nI0709 10:16:38.059505       1 api.go:102] Pool worker requested by 10.0.181.192:23312\n
Jul 09 10:58:43.862 E ns/openshift-machine-config-operator pod/machine-config-server-gk5ns node/ip-10-0-203-40.us-west-2.compute.internal container=machine-config-server container exited with code 2 (Error): I0709 10:11:52.793516       1 start.go:38] Version: machine-config-daemon-4.4.0-202006242133.p0-14-g1cb23e03-dirty (1cb23e039073a7452086dfefa2d492f621f7989a)\nI0709 10:11:52.794675       1 api.go:56] Launching server on :22624\nI0709 10:11:52.801539       1 api.go:56] Launching server on :22623\nI0709 10:16:32.112645       1 api.go:102] Pool worker requested by 10.0.255.11:6268\n
Jul 09 10:58:48.833 E ns/openshift-monitoring pod/thanos-querier-6999586cb-2dwxs node/ip-10-0-134-254.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/09 10:41:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/09 10:41:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/09 10:41:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/09 10:41:47 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/09 10:41:47 http.go:107: HTTPS: listening on [::]:9091\nI0709 10:41:47.675052       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/09 10:44:10 oauthproxy.go:774: basicauth: 10.130.0.49:49394 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:45:10 oauthproxy.go:774: basicauth: 10.130.0.49:53834 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:48:10 oauthproxy.go:774: basicauth: 10.130.0.49:56314 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:50:10 oauthproxy.go:774: basicauth: 10.130.0.49:57854 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:52:10 oauthproxy.go:774: basicauth: 10.130.0.49:59308 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:53:10 oauthproxy.go:774: basicauth: 10.130.0.49:60002 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:54:10 oauthproxy.go:774: basicauth: 10.130.0.49:60676 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:55:10 oauthproxy.go:774: basicauth: 10.130.0.49:33138 Authorization header does not start with 'Basic', skipping basic authentication\n2020/07/09 10:56:10 oauthproxy.go:774: basicauth: 10.130.0.49:34004 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 09 10:58:49.947 E ns/openshift-marketplace pod/certified-operators-7774b7f5cf-qdkgn node/ip-10-0-134-254.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 09 10:58:50.916 E ns/openshift-monitoring pod/openshift-state-metrics-fd5667d9c-8hrvp node/ip-10-0-134-254.us-west-2.compute.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jul 09 10:58:55.796 E ns/openshift-service-ca pod/service-ca-84cd88fc6d-5k7nh node/ip-10-0-135-176.us-west-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Jul 09 10:59:28.904 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-151-7.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-09T10:59:09.450Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-09T10:59:09.453Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-09T10:59:09.455Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-09T10:59:09.456Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-09T10:59:09.456Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-09T10:59:09.456Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-09T10:59:09.457Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-09T10:59:09.458Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-09T10:59:09.458Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-09
Jul 09 11:00:09.027 E ns/openshift-marketplace pod/community-operators-596494b5f9-qprhw node/ip-10-0-151-7.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 09 11:00:32.944 E ns/openshift-marketplace pod/certified-operators-6869f96c99-vhwlq node/ip-10-0-252-241.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 09 11:01:29.308 E ns/openshift-image-registry pod/node-ca-lkb8c node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.309 E ns/openshift-cluster-node-tuning-operator pod/tuned-r52zt node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.323 E ns/openshift-monitoring pod/node-exporter-6lrhl node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.334 E ns/openshift-sdn pod/sdn-pqh9h node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.348 E ns/openshift-sdn pod/ovs-jcf4x node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.371 E ns/openshift-multus pod/multus-mkfxn node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.376 E ns/openshift-dns pod/dns-default-55c8k node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:01:29.396 E ns/openshift-machine-config-operator pod/machine-config-daemon-6wnst node/ip-10-0-134-254.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
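The "invariant violation" events above come from the test monitor's pod-phase rule: once a pod has been observed Running, a later observation of Pending is reported as an error. A minimal sketch of that rule follows, assuming the upstream k8s.io/api types; it is not the openshift-tests monitor code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkPhaseTransition flags the disallowed Running->Pending transition
// reported in the events above; any other transition passes.
func checkPhaseTransition(oldPhase, newPhase corev1.PodPhase) error {
	if oldPhase == corev1.PodRunning && newPhase == corev1.PodPending {
		return fmt.Errorf("invariant violation: pod may not transition Running->Pending")
	}
	return nil
}

func main() {
	// Example: the transition recorded for the pods on ip-10-0-134-254 above.
	fmt.Println(checkPhaseTransition(corev1.PodRunning, corev1.PodPending))
}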
Jul 09 11:01:38.559 E ns/openshift-machine-config-operator pod/machine-config-daemon-6wnst node/ip-10-0-134-254.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:01:38.747 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jul 09 11:01:47.360 E ns/openshift-monitoring pod/thanos-querier-6999586cb-h7nbm node/ip-10-0-151-7.us-west-2.compute.internal container=oauth-proxy container exited with code 2 (Error): 2020/07/09 10:58:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/09 10:58:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/09 10:58:55 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/09 10:58:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/07/09 10:58:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/09 10:58:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/07/09 10:58:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/07/09 10:58:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/07/09 10:58:55 http.go:107: HTTPS: listening on [::]:9091\nI0709 10:58:55.736241       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/09 11:00:20 oauthproxy.go:774: basicauth: 10.130.0.49:52750 Authorization header does not start with 'Basic', skipping basic authentication\n
Jul 09 11:01:48.452 E ns/openshift-marketplace pod/certified-operators-b79956c7c-kd8fj node/ip-10-0-151-7.us-west-2.compute.internal container=certified-operators container exited with code 2 (Error): 
Jul 09 11:01:48.482 E ns/openshift-monitoring pod/grafana-88fc454b6-vv7hm node/ip-10-0-151-7.us-west-2.compute.internal container=grafana container exited with code 1 (Error): 
Jul 09 11:01:48.482 E ns/openshift-monitoring pod/grafana-88fc454b6-vv7hm node/ip-10-0-151-7.us-west-2.compute.internal container=grafana-proxy container exited with code 2 (Error): 
Jul 09 11:01:48.516 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-7.us-west-2.compute.internal container=config-reloader container exited with code 2 (Error): 2020/07/09 10:59:09 Watching directory: "/etc/alertmanager/config"\n
Jul 09 11:01:48.516 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-151-7.us-west-2.compute.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/07/09 10:59:09 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/09 10:59:09 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/07/09 10:59:09 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/07/09 10:59:09 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/07/09 10:59:09 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/07/09 10:59:09 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/07/09 10:59:09 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0709 10:59:09.599133       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/07/09 10:59:09 http.go:107: HTTPS: listening on [::]:9095\n
Jul 09 11:01:57.722 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-135-176.us-west-2.compute.internal" not ready since 2020-07-09 10:59:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-135-176.us-west-2.compute.internal is unhealthy
Jul 09 11:01:57.734 E clusteroperator/kube-scheduler changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-135-176.us-west-2.compute.internal" not ready since 2020-07-09 10:59:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Jul 09 11:01:57.737 E clusteroperator/kube-apiserver changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-135-176.us-west-2.compute.internal" not ready since 2020-07-09 10:59:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
Jul 09 11:01:57.750 E clusteroperator/kube-controller-manager changed Degraded to True: NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-135-176.us-west-2.compute.internal" not ready since 2020-07-09 10:59:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
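The four Degraded transitions above share one signal: the node controller marking ip-10-0-135-176 NotReady after its kubelet stopped posting status during the upgrade reboot. As a hedged illustration only (not part of the test suite), the sketch below lists ClusterOperators with the Kubernetes dynamic client and prints any Degraded=True conditions like these; the kubeconfig path and the panic-style error handling are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, co := range list.Items {
		conditions, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
		for _, c := range conditions {
			cond, ok := c.(map[string]interface{})
			if !ok {
				continue
			}
			// Print operators reporting Degraded=True, e.g. etcd or kube-apiserver above.
			if cond["type"] == "Degraded" && cond["status"] == "True" {
				fmt.Printf("%s Degraded: %v\n", co.GetName(), cond["message"])
			}
		}
	}
}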
Jul 09 11:02:04.164 E ns/openshift-controller-manager pod/controller-manager-958cn node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.182 E ns/openshift-cluster-node-tuning-operator pod/tuned-k9g7q node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.202 E ns/openshift-image-registry pod/node-ca-svwr5 node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.218 E ns/openshift-monitoring pod/node-exporter-glxhz node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.229 E ns/openshift-sdn pod/ovs-s5mjd node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.241 E ns/openshift-sdn pod/sdn-controller-xlz97 node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.253 E ns/openshift-multus pod/multus-fsp99 node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.266 E ns/openshift-sdn pod/sdn-vn8kj node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.276 E ns/openshift-multus pod/multus-admission-controller-2fbph node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.289 E ns/openshift-dns pod/dns-default-665cp node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.303 E ns/openshift-machine-config-operator pod/machine-config-daemon-wsfkh node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:04.313 E ns/openshift-machine-config-operator pod/machine-config-server-2fwqt node/ip-10-0-135-176.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:02:14.684 E ns/openshift-machine-config-operator pod/machine-config-daemon-wsfkh node/ip-10-0-135-176.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:02:35.074 E ns/openshift-machine-api pod/machine-api-controllers-7f8cbf55d5-c4cxl node/ip-10-0-203-40.us-west-2.compute.internal container=controller-manager container exited with code 1 (Error): 
Jul 09 11:02:52.235 E ns/openshift-console pod/console-8d6f449b9-l9h2c node/ip-10-0-203-40.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-07-09T10:44:05Z cmd/main: cookies are secure!\n2020-07-09T10:44:05Z cmd/main: Binding to [::]:8443...\n2020-07-09T10:44:05Z cmd/main: using TLS\n2020-07-09T10:59:29Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n2020-07-09T10:59:31Z auth: failed to get latest auth source data: Get https://kubernetes.default.svc/.well-known/oauth-authorization-server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
Jul 09 11:04:28.911 E ns/openshift-image-registry pod/node-ca-qkp96 node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:28.925 E ns/openshift-cluster-node-tuning-operator pod/tuned-q8czg node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:28.941 E ns/openshift-monitoring pod/node-exporter-mw5dm node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:28.949 E ns/openshift-sdn pod/ovs-762qb node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:28.971 E ns/openshift-sdn pod/sdn-wv756 node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:28.997 E ns/openshift-multus pod/multus-rsq2q node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:29.010 E ns/openshift-dns pod/dns-default-mjjfr node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:29.010 E ns/openshift-machine-config-operator pod/machine-config-daemon-mrmv4 node/ip-10-0-151-7.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:04:38.296 E ns/openshift-machine-config-operator pod/machine-config-daemon-mrmv4 node/ip-10-0-151-7.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:04:46.788 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-66fb9bfbc6-vf96n node/ip-10-0-252-241.us-west-2.compute.internal container=operator container exited with code 255 (Error): io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=38778&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0709 11:03:01.344714       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0709 11:03:01.344768       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0709 11:03:01.345025       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nE0709 11:03:01.345098       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=38780&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp 172.30.0.1:443: connect: connection refused\nI0709 11:03:02.379044       1 operator.go:146] Starting syncing operator at 2020-07-09 11:03:02.379031819 +0000 UTC m=+1310.454361990\nI0709 11:03:02.485636       1 operator.go:148] Finished syncing operator at 106.592179ms\nI0709 11:04:45.358624       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 11:04:45.358958       1 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/serving-cert-304950491/tls.crt::/tmp/serving-cert-304950491/tls.key\nI0709 11:04:45.359182       1 management_state_controller.go:112] Shutting down management-state-controller-csi-snapshot-controller\nI0709 11:04:45.359207       1 logging_controller.go:93] Shutting down LogLevelController\nI0709 11:04:45.359221       1 status_controller.go:212] Shutting down StatusSyncer-csi-snapshot-controller\nF0709 11:04:45.359290       1 builder.go:243] stopped\n
Jul 09 11:04:47.779 E ns/openshift-marketplace pod/redhat-operators-57b8d59f6b-xhmnx node/ip-10-0-252-241.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 09 11:04:47.840 E ns/openshift-marketplace pod/community-operators-59f96d9b6f-8pczh node/ip-10-0-252-241.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 09 11:04:48.993 E ns/openshift-marketplace pod/redhat-marketplace-55b8b6bbb7-49gbr node/ip-10-0-252-241.us-west-2.compute.internal container=redhat-marketplace container exited with code 2 (Error): 
Jul 09 11:04:51.658 E clusterversion/version changed Failing to True: ClusterOperatorNotAvailable: Cluster operator machine-config is still updating
Jul 09 11:05:07.603 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-151-7.us-west-2.compute.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-07-09T11:05:04.772Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-07-09T11:05:04.776Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-07-09T11:05:04.777Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-07-09T11:05:04.778Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-07-09T11:05:04.779Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-07-09T11:05:04.779Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-07-09T11:05:04.779Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-07-09T11:05:04.780Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-07-09T11:05:04.780Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-07-09
Jul 09 11:05:25.815 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node "ip-10-0-203-40.us-west-2.compute.internal" not ready since 2020-07-09 11:04:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-203-40.us-west-2.compute.internal is unhealthy
Jul 09 11:05:43.047 E ns/openshift-monitoring pod/node-exporter-5brbd node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.055 E ns/openshift-controller-manager pod/controller-manager-kt5xv node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.074 E ns/openshift-cluster-node-tuning-operator pod/tuned-7w5sk node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.091 E ns/openshift-image-registry pod/node-ca-v8w6n node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.098 E ns/openshift-sdn pod/ovs-l42xq node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.107 E ns/openshift-multus pod/multus-admission-controller-6wc6f node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.115 E ns/openshift-sdn pod/sdn-controller-5mjtt node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.125 E ns/openshift-multus pod/multus-hkdqn node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.133 E ns/openshift-dns pod/dns-default-nsrdf node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.140 E ns/openshift-machine-config-operator pod/machine-config-server-fpwmm node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:43.148 E ns/openshift-machine-config-operator pod/machine-config-daemon-57lmg node/ip-10-0-203-40.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:05:53.722 E ns/openshift-machine-config-operator pod/machine-config-daemon-57lmg node/ip-10-0-203-40.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:06:35.340 E ns/openshift-authentication-operator pod/authentication-operator-94d9fc485-n2s99 node/ip-10-0-172-138.us-west-2.compute.internal container=operator container exited with code 255 (Error): e":"Upgradeable"}]}}\nI0709 11:03:45.525726       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"4e713c62-14a3-4ab0-9ac0-a287bc668359", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.130.0.65:40144->172.30.0.10:53: i/o timeout"\nI0709 11:03:56.773840       1 status_controller.go:176] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2020-07-09T10:21:11Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-07-09T11:02:39Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-07-09T10:28:26Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-07-09T10:10:44Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0709 11:03:56.790924       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"4e713c62-14a3-4ab0-9ac0-a287bc668359", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: dial tcp: lookup oauth-openshift.apps.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com on 172.30.0.10:53: read udp 10.130.0.65:40144->172.30.0.10:53: i/o timeout" to ""\nI0709 11:06:33.129954       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0709 11:06:33.130012       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0709 11:06:33.130056       1 builder.go:210] server exited\n
Jul 09 11:06:40.673 E ns/openshift-service-ca pod/service-ca-84cd88fc6d-btqcn node/ip-10-0-172-138.us-west-2.compute.internal container=service-ca-controller container exited with code 255 (Error): 
Jul 09 11:06:40.692 E ns/openshift-service-ca-operator pod/service-ca-operator-68757bd698-7nsrn node/ip-10-0-172-138.us-west-2.compute.internal container=operator container exited with code 255 (Error): 
Jul 09 11:06:56.904 E ns/openshift-console pod/console-8d6f449b9-pbsb7 node/ip-10-0-172-138.us-west-2.compute.internal container=console container exited with code 2 (Error): 2020-07-09T10:43:52Z cmd/main: cookies are secure!\n2020-07-09T10:43:52Z cmd/main: Binding to [::]:8443...\n2020-07-09T10:43:52Z cmd/main: using TLS\n
Jul 09 11:07:25.993 E ns/openshift-image-registry pod/node-ca-jqkjm node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:25.993 E ns/openshift-cluster-node-tuning-operator pod/tuned-kz4t9 node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:25.996 E ns/openshift-monitoring pod/node-exporter-sk2ct node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:26.018 E ns/openshift-sdn pod/ovs-58jxq node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:26.048 E ns/openshift-multus pod/multus-slghv node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:26.064 E ns/openshift-dns pod/dns-default-crjs2 node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:26.064 E ns/openshift-machine-config-operator pod/machine-config-daemon-qb2m2 node/ip-10-0-252-241.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:07:33.996 E ns/openshift-marketplace pod/redhat-operators-57b8d59f6b-lqddh node/ip-10-0-151-7.us-west-2.compute.internal container=redhat-operators container exited with code 2 (Error): 
Jul 09 11:07:35.131 E ns/openshift-machine-config-operator pod/machine-config-daemon-qb2m2 node/ip-10-0-252-241.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:07:49.034 E ns/openshift-marketplace pod/community-operators-59f96d9b6f-7mmkn node/ip-10-0-151-7.us-west-2.compute.internal container=community-operators container exited with code 2 (Error): 
Jul 09 11:08:20.120 E kube-apiserver failed contacting the API: Get https://api.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=46399&timeout=9m25s&timeoutSeconds=565&watch=true: dial tcp 52.27.40.6:6443: connect: connection refused
Jul 09 11:08:20.123 E kube-apiserver failed contacting the API: Get https://api.ci-op-mrbsy7t8-e8e57.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=46436&timeout=7m46s&timeoutSeconds=466&watch=true: dial tcp 52.27.40.6:6443: connect: connection refused
Jul 09 11:09:48.298 E ns/openshift-cluster-node-tuning-operator pod/tuned-5vngz node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.310 E ns/openshift-controller-manager pod/controller-manager-zgsxm node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.326 E ns/openshift-image-registry pod/node-ca-kh9k2 node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.342 E ns/openshift-monitoring pod/node-exporter-nv9r4 node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.352 E ns/openshift-sdn pod/sdn-controller-svrgp node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.377 E ns/openshift-multus pod/multus-admission-controller-qnd5j node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.414 E ns/openshift-sdn pod/ovs-xqwxp node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.425 E ns/openshift-multus pod/multus-9pg9g node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.434 E ns/openshift-dns pod/dns-default-62sz9 node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.444 E ns/openshift-machine-config-operator pod/machine-config-daemon-stc8c node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:48.451 E ns/openshift-machine-config-operator pod/machine-config-server-ltt8f node/ip-10-0-172-138.us-west-2.compute.internal invariant violation: pod may not transition Running->Pending
Jul 09 11:09:52.500 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-172-138.us-west-2.compute.internal is unhealthy
Jul 09 11:09:57.235 E ns/openshift-machine-config-operator pod/machine-config-daemon-stc8c node/ip-10-0-172-138.us-west-2.compute.internal container=oauth-proxy container exited with code 1 (Error): 
Jul 09 11:12:19.377 E clusterversion/version changed Failing to True: WorkloadNotAvailable: deployment openshift-machine-config-operator/etcd-quorum-guard is progressing ReplicaSetUpdated: ReplicaSet "etcd-quorum-guard-76b4774fd8" is progressing.