Result: SUCCESS
Tests: 3 failed / 24 succeeded
Started: 2020-05-18 21:58
Elapsed: 1h18m
Work namespace: ci-op-d2zn5gn9
Refs: openshift-4.5:21a72caa, 48:024ac3bd
Pod: a8756a5a-9952-11ea-9b2f-0a580a820526
Repo: openshift/etcd
Revision: 1

Test Failures


Cluster upgrade Kubernetes APIs remain available (32m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sKubernetes\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 2s of 32m29s (0%):

May 18 22:54:14.451 E kube-apiserver Kube API started failing: Get https://api.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s: unexpected EOF
May 18 22:54:15.440 E kube-apiserver Kube API is not responding to GET requests
May 18 22:54:15.517 I kube-apiserver Kube API started responding to GET requests
				from junit_upgrade_1589843220.xml
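
The disruption figure above is produced by repeatedly polling the same endpoint shown in the events and summing the time between each "started failing" and the next "started responding" transition. A rough, self-contained sketch of that kind of sampling loop in Go (not the actual openshift-tests monitor; plain net/http with TLS verification and authentication omitted, and an arbitrary sample count):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the failure output above; the real monitor uses an
	// authenticated client, which this sketch omits.
	url := "https://api.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=15s"
	client := &http.Client{
		Timeout:   15 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // simplification for the sketch
	}

	var unavailable time.Duration
	var downSince time.Time
	start := time.Now()

	for i := 0; i < 60; i++ { // arbitrary: sample once per second for a minute
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
		}
		switch {
		case err != nil && downSince.IsZero():
			downSince = time.Now() // "Kube API started failing"
		case err == nil && !downSince.IsZero():
			unavailable += time.Since(downSince) // "Kube API started responding"
			downSince = time.Time{}
		}
		time.Sleep(time.Second)
	}
	fmt.Printf("API was unreachable during disruption for at least %s of %s\n",
		unavailable.Round(time.Second), time.Since(start).Round(time.Second))
}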



Cluster upgrade OpenShift APIs remain available (32m30s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Cluster\supgrade\sOpenShift\sAPIs\sremain\savailable$'
API was unreachable during disruption for at least 1s of 32m29s (0%):

May 18 22:51:56.004 I openshift-apiserver OpenShift API stopped responding to GET requests: rpc error: code = Unavailable desc = transport is closing
May 18 22:51:56.910 E openshift-apiserver OpenShift API is not responding to GET requests
May 18 22:51:56.986 I openshift-apiserver OpenShift API started responding to GET requests
				from junit_upgrade_1589843220.xml
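
Both failure summaries above come from junit_upgrade_1589843220.xml. To list the failed cases from that file locally, a minimal sketch with encoding/xml is enough, assuming the usual junit layout of a testsuite root with testcase children; the structs below declare only the fields this sketch needs:

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Only the junit fields this sketch needs.
type testSuite struct {
	Cases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:"message,attr"`
	Text    string `xml:",chardata"`
}

func main() {
	f, err := os.Open("junit_upgrade_1589843220.xml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var suite testSuite
	if err := xml.NewDecoder(f).Decode(&suite); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, tc := range suite.Cases {
		if tc.Failure != nil {
			fmt.Printf("FAIL: %s\n%s\n%s\n\n", tc.Name, tc.Failure.Message, tc.Failure.Text)
		}
	}
}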



openshift-tests Monitor cluster while tests execute (37m57s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
112 error level events were detected during this test run:

May 18 22:29:17.462 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0518 22:29:17.111520       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:29:17.111533       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:29:17.111542       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:29:17.111552       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:29:17.111562       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:29:17.111572       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:29:17.111586       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:29:17.111597       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:29:17.111608       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:29:17.111627       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:29:17.111636       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:29:17.111642       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:29:17.111647       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:29:17.111653       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:29:17.111675       1 server.go:681] external host was not specified, using 10.0.150.58\nI0518 22:29:17.111851       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:29:17.112145       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:29:49.612 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0518 22:29:49.104065       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:29:49.104081       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:29:49.104090       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:29:49.104099       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:29:49.104109       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:29:49.104117       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:29:49.104134       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:29:49.104145       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:29:49.104155       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:29:49.104167       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:29:49.104178       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:29:49.104188       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:29:49.104198       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:29:49.104208       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:29:49.104243       1 server.go:681] external host was not specified, using 10.0.150.58\nI0518 22:29:49.104423       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:29:49.104691       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:30:14.763 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): rceVersion=19626&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:30:14.142892       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:6443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=16933&timeout=7m41s&timeoutSeconds=461&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:30:14.143906       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/servicemonitors?allowWatchBookmarks=true&resourceVersion=21509&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:30:14.145492       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/authentications?allowWatchBookmarks=true&resourceVersion=19705&timeout=6m30s&timeoutSeconds=390&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0518 22:30:14.145897       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nE0518 22:30:14.146520       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/operatorhubs?allowWatchBookmarks=true&resourceVersion=19592&timeout=9m21s&timeoutSeconds=561&watch=true: dial tcp [::1]:6443: connect: connection refused\nF0518 22:30:14.146641       1 controllermanager.go:291] leaderelection lost\nI0518 22:30:14.146093       1 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"", Name:"", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-10-0-150-58_a6131fc3-1b6f-4423-83a1-987dd91e3dbd stopped leading\n
May 18 22:30:39.864 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): r scope\nE0518 22:30:37.651666       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ImageStream: imagestreams.image.openshift.io is forbidden: User "system:kube-controller-manager" cannot list resource "imagestreams" in API group "image.openshift.io" at the cluster scope\nE0518 22:30:37.741280       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-controller-manager" cannot list resource "namespaces" in API group "" at the cluster scope\nE0518 22:30:37.741280       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ResourceQuota: resourcequotas is forbidden: User "system:kube-controller-manager" cannot list resource "resourcequotas" in API group "" at the cluster scope\nE0518 22:30:37.741539       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-controller-manager" cannot list resource "statefulsets" in API group "apps" at the cluster scope\nE0518 22:30:37.741820       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope\nE0518 22:30:37.745142       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.ControllerRevision: controllerrevisions.apps is forbidden: User "system:kube-controller-manager" cannot list resource "controllerrevisions" in API group "apps" at the cluster scope\nE0518 22:30:37.745312       1 reflector.go:178] runtime/asm_amd64.s:1357: Failed to list *v1.PodTemplate: podtemplates is forbidden: User "system:kube-controller-manager" cannot list resource "podtemplates" in API group "" at the cluster scope\nI0518 22:30:38.786098       1 leaderelection.go:277] failed to renew lease openshift-kube-controller-manager/cluster-policy-controller: timed out waiting for the condition\nF0518 22:30:38.786161       1 policy_controller.go:94] leaderelection lost\n
May 18 22:34:05.535 E ns/openshift-machine-api pod/machine-api-operator-f956f5c6f-tspjn node/ip-10-0-251-4.us-west-1.compute.internal container/machine-api-operator container exited with code 2 (Error): 
May 18 22:34:29.755 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0518 22:34:28.350514       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0518 22:34:28.354139       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0518 22:34:28.356321       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0518 22:34:28.361458       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 18 22:34:29.988 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0518 22:34:28.244882       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:34:28.244896       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:34:28.244906       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:34:28.244916       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:34:28.244926       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:34:28.244936       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:34:28.244949       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:34:28.244960       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:34:28.244975       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:34:28.244986       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:34:28.244997       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:34:28.245007       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:34:28.245017       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:34:28.245027       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:34:28.245287       1 server.go:681] external host was not specified, using 10.0.150.58\nI0518 22:34:28.245509       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:34:28.245763       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:34:47.828 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0518 22:34:47.710362       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0518 22:34:47.716565       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0518 22:34:47.716679       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0518 22:34:47.721203       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 18 22:34:51.153 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0518 22:34:50.148359       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:34:50.148372       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:34:50.148379       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:34:50.148385       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:34:50.148391       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:34:50.148396       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:34:50.148405       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:34:50.148411       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:34:50.148417       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:34:50.148423       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:34:50.148429       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:34:50.148435       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:34:50.148442       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:34:50.148452       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:34:50.148483       1 server.go:681] external host was not specified, using 10.0.150.58\nI0518 22:34:50.148654       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:34:50.148902       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:35:13.265 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): IPRanger"\nI0518 22:35:12.279957       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:35:12.279970       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:35:12.279979       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:35:12.279988       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:35:12.279996       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:35:12.280005       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:35:12.280017       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:35:12.280027       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:35:12.280036       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:35:12.280046       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:35:12.280073       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:35:12.280083       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:35:12.280092       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:35:12.280101       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:35:12.280135       1 server.go:681] external host was not specified, using 10.0.150.58\nI0518 22:35:12.280343       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:35:12.280612       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:35:17.845 E ns/openshift-controller-manager pod/controller-manager-zx6cx node/ip-10-0-169-128.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): I0518 22:19:25.556356       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (554623c)\nI0518 22:19:25.558757       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable-initial@sha256:6c9a53a8fd544809f8d2e563d80c87cd6ae35105ec6ae7785ba718d56db14655"\nI0518 22:19:25.558988       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable-initial@sha256:579be1a4b551c32690f221641c5f4c18a54022e4571a45055696b3bada85fd1a"\nI0518 22:19:25.558930       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0518 22:19:25.560368       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
May 18 22:35:17.971 E ns/openshift-controller-manager pod/controller-manager-zf7sc node/ip-10-0-251-4.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): I0518 22:19:35.157623       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (554623c)\nI0518 22:19:35.159362       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable-initial@sha256:6c9a53a8fd544809f8d2e563d80c87cd6ae35105ec6ae7785ba718d56db14655"\nI0518 22:19:35.159388       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable-initial@sha256:579be1a4b551c32690f221641c5f4c18a54022e4571a45055696b3bada85fd1a"\nI0518 22:19:35.159534       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0518 22:19:35.159554       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
May 18 22:35:59.102 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0518 22:35:58.420118       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0518 22:35:58.428425       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0518 22:35:58.429132       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 18 22:36:14.181 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0518 22:36:13.287686       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0518 22:36:13.289555       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0518 22:36:13.289600       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0518 22:36:13.290348       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 18 22:36:15.590 E ns/openshift-machine-api pod/machine-api-controllers-567bc98f69-kzsmk node/ip-10-0-150-58.us-west-1.compute.internal container/machineset-controller container exited with code 1 (Error): 
May 18 22:36:19.205 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-5654465b54-clk4m node/ip-10-0-169-128.us-west-1.compute.internal container/kube-storage-version-migrator-operator container exited with code 1 (Error):     1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:25:21.933218       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:36:18.568079       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0518 22:36:18.568449       1 builder.go:248] server exited\nI0518 22:36:18.568531       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\nI0518 22:36:18.568552       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0518 22:36:18.568646       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController\nI0518 22:36:18.568673       1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\nI0518 22:36:18.568699       1 secure_serving.go:222] Stopped listening on [::]:8443\nI0518 22:36:18.568742       1 base_controller.go:101] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0518 22:36:18.568766       1 controller.go:123] Shutting down KubeStorageVersionMigratorOperator\nI0518 22:36:18.568779       1 base_controller.go:101] Shutting down LoggingSyncer ...\nI0518 22:36:18.568807       1 base_controller.go:58] Shutting down worker of StatusSyncer_kube-storage-version-migrator controller ...\nI0518 22:36:18.568825       1 base_controller.go:48] All StatusSyncer_kube-storage-version-migrator workers have been terminated\nI0518 22:36:18.568838       1 base_controller.go:58] Shutting down worker of LoggingSyncer controller ...\nI0518 22:36:18.568844       1 base_controller.go:48] All LoggingSyncer workers have been terminated\nW0518 22:36:18.568869       1 builder.go:94] graceful termination failed, controllers failed with error: stopped\nI0518 22:36:18.568888       1 reflector.go:181] Stopping reflector *v1.ClusterOperator (10m0s) from runtime/asm_amd64.s:1357\n
May 18 22:36:34.286 E ns/openshift-cluster-machine-approver pod/machine-approver-fcc85dc46-2b94q node/ip-10-0-169-128.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): e%3Dmachine-approver&resourceVersion=20675&timeoutSeconds=569&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0518 22:27:38.362771       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=16170&timeoutSeconds=504&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0518 22:27:39.363107       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=20675&timeoutSeconds=593&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0518 22:27:39.363359       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=16170&timeoutSeconds=576&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0518 22:27:40.363839       1 reflector.go:380] github.com/openshift/cluster-machine-approver/status.go:98: Failed to watch *v1.ClusterOperator: Get https://127.0.0.1:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dmachine-approver&resourceVersion=20675&timeoutSeconds=327&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\nE0518 22:27:40.364774       1 reflector.go:380] github.com/openshift/cluster-machine-approver/main.go:239: Failed to watch *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=16170&timeoutSeconds=447&watch=true: dial tcp 127.0.0.1:6443: connect: connection refused\n
May 18 22:36:39.036 E ns/openshift-kube-storage-version-migrator pod/migrator-7ccfd8b678-pdwpz node/ip-10-0-233-6.us-west-1.compute.internal container/migrator container exited with code 2 (Error): 
May 18 22:36:43.323 E ns/openshift-insights pod/insights-operator-6f9f66c6bb-xlzst node/ip-10-0-169-128.us-west-1.compute.internal container/operator container exited with code 2 (Error): 7767       1 insightsuploader.go:150] Uploaded report successfully in 897.743558ms\nI0518 22:33:34.637793       1 status.go:89] Initializing last reported time to 2020-05-18T22:33:33Z\nI0518 22:33:34.643371       1 status.go:298] The operator is healthy\nI0518 22:33:38.230939       1 httplog.go:90] GET /metrics: (10.650809ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:33:54.299177       1 httplog.go:90] GET /metrics: (9.36257ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:34:08.230206       1 httplog.go:90] GET /metrics: (9.909841ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:34:24.300252       1 httplog.go:90] GET /metrics: (10.35834ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:34:33.717390       1 status.go:298] The operator is healthy\nI0518 22:34:38.232641       1 httplog.go:90] GET /metrics: (12.314093ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:34:54.298188       1 httplog.go:90] GET /metrics: (8.261274ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:35:08.228966       1 httplog.go:90] GET /metrics: (8.409371ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:35:24.298709       1 httplog.go:90] GET /metrics: (9.015317ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:35:30.599600       1 reflector.go:418] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Watch close - *v1.ConfigMap total 1 items received\nI0518 22:35:38.230479       1 httplog.go:90] GET /metrics: (10.116741ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:35:54.299925       1 httplog.go:90] GET /metrics: (10.141724ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:36:08.229541       1 httplog.go:90] GET /metrics: (9.070818ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\nI0518 22:36:24.299071       1 httplog.go:90] GET /metrics: (9.294918ms) 200 [Prometheus/2.15.2 10.131.0.7:57700]\nI0518 22:36:33.720447       1 status.go:298] The operator is healthy\nI0518 22:36:38.229356       1 httplog.go:90] GET /metrics: (9.01695ms) 200 [Prometheus/2.15.2 10.129.2.15:38586]\n
May 18 22:37:23.547 E ns/openshift-service-ca-operator pod/service-ca-operator-5f844dd774-w9mh2 node/ip-10-0-169-128.us-west-1.compute.internal container/operator container exited with code 1 (Error): 
May 18 22:37:25.384 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-150-58.us-west-1.compute.internal node/ip-10-0-150-58.us-west-1.compute.internal container/cluster-policy-controller container exited with code 255 (Error): I0518 22:37:24.036711       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0518 22:37:24.106208       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0518 22:37:24.118402       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
May 18 22:37:28.355 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): lIPRanger"\nI0518 22:37:26.111859       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:37:26.111910       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:37:26.111955       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:37:26.112012       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:37:26.112059       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:37:26.112104       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:37:26.112155       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:37:26.112207       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:37:26.112297       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:37:26.112351       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:37:26.112398       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:37:26.112444       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:37:26.112488       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:37:26.112534       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:37:26.112613       1 server.go:681] external host was not specified, using 10.0.251.4\nI0518 22:37:26.112960       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:37:26.113497       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:37:33.218 E ns/openshift-monitoring pod/node-exporter-nmv98 node/ip-10-0-169-128.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -18T22:18:52Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:52Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 18 22:37:34.556 E ns/openshift-monitoring pod/openshift-state-metrics-6d78f6f6df-s52mw node/ip-10-0-189-73.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
May 18 22:37:39.339 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/18 22:24:48 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/05/18 22:24:50 config map updated\n2020/05/18 22:24:50 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n2020/05/18 22:29:35 config map updated\n2020/05/18 22:29:36 successfully triggered reload\n
May 18 22:37:39.339 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/05/18 22:24:48 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:24:48 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/18 22:24:48 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:24:48 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/18 22:24:48 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:24:48 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:24:48 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:24:48 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/18 22:24:48 http.go:107: HTTPS: listening on [::]:9091\nI0518 22:24:48.775045       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/18 22:25:17 oauthproxy.go:774: basicauth: 10.129.2.7:55530 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:29:48 oauthproxy.go:774: basicauth: 10.129.2.7:58556 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:34:18 oauthproxy.go:774: basicauth: 10.129.2.7:33302 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:37:00 oauthproxy.go:774: basicauth: 10.128.0.65:44122 Authorization header does not start with 'Basic', skipping basic authentication\n
May 18 22:37:39.339 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-18T22:24:47.925440998Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-18T22:24:47.926825279Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-18T22:24:53.053679757Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:24:53.053766419Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-05-18T22:24:53.171837225Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:27:53.149869927Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:30:53.189064885Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
May 18 22:37:40.933 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): lIPRanger"\nI0518 22:37:39.731662       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:37:39.731676       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:37:39.731686       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:37:39.731695       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:37:39.731708       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:37:39.731718       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:37:39.731735       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:37:39.731746       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:37:39.731756       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:37:39.731768       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:37:39.731779       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:37:39.731788       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:37:39.731799       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:37:39.731809       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:37:39.731844       1 server.go:681] external host was not specified, using 10.0.251.4\nI0518 22:37:39.732020       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:37:39.732323       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:37:54.570 E ns/openshift-monitoring pod/node-exporter-t42c8 node/ip-10-0-150-58.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -18T22:18:49Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-18T22:18:49Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 18 22:37:59.618 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:37:50.555Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:37:50.561Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:37:50.561Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:37:50.562Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:37:50.562Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:37:50.562Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:37:50.563Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:37:50.563Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:38:02.904 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): lIPRanger"\nI0518 22:38:02.607988       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:38:02.608001       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:38:02.608019       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:38:02.608029       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:38:02.608038       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:38:02.608047       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:38:02.608060       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:38:02.608083       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:38:02.608094       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:38:02.608104       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:38:02.608116       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:38:02.608126       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:38:02.608147       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:38:02.608167       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:38:02.608214       1 server.go:681] external host was not specified, using 10.0.251.4\nI0518 22:38:02.608531       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:38:02.608966       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:38:04.922 E ns/openshift-monitoring pod/grafana-b5dbf4d5d-b5dtq node/ip-10-0-189-73.us-west-1.compute.internal container/grafana container exited with code 1 (Error): 
May 18 22:38:04.959 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-189-73.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/18 22:24:38 Watching directory: "/etc/alertmanager/config"\n
May 18 22:38:04.959 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-189-73.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/18 22:24:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/18 22:24:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/18 22:24:38 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:24:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/18 22:24:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:24:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/18 22:24:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:24:38 http.go:107: HTTPS: listening on [::]:9095\nI0518 22:24:38.467080       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
May 18 22:38:05.668 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-99.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/18 22:24:55 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n2020/05/18 22:24:56 config map updated\n2020/05/18 22:24:56 error: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused\n2020/05/18 22:29:05 config map updated\n2020/05/18 22:29:05 successfully triggered reload\n
May 18 22:38:05.668 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-99.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/05/18 22:24:55 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:24:55 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/18 22:24:55 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:24:55 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/18 22:24:55 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:24:55 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:24:55 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:24:55 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0518 22:24:55.958550       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/18 22:24:55 http.go:107: HTTPS: listening on [::]:9091\n2020/05/18 22:37:39 oauthproxy.go:774: basicauth: 10.128.2.22:38874 Authorization header does not start with 'Basic', skipping basic authentication\n
May 18 22:38:05.668 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-99.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-18T22:24:55.332631466Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-18T22:24:55.333864309Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-18T22:25:00.436893106Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:25:00.437007008Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\nlevel=info ts=2020-05-18T22:25:00.542685441Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:28:00.524481341Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:31:00.656112285Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
May 18 22:38:12.668 E ns/openshift-monitoring pod/node-exporter-9xrgf node/ip-10-0-154-99.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -18T22:22:39Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:39Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 18 22:38:19.749 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-99.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:38:11.998Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:38:12.003Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:38:12.004Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:38:12.005Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:38:12.005Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:38:12.005Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:38:12.006Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:38:12.006Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:38:30.707 E ns/openshift-monitoring pod/node-exporter-sp6j6 node/ip-10-0-233-6.us-west-1.compute.internal container/node-exporter container exited with code 143 (Error): -18T22:22:57Z" level=info msg=" - filefd" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - filesystem" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - infiniband" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - ipvs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - loadavg" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - mdadm" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - meminfo" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - mountstats" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - netclass" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - netdev" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - netstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - nfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - nfsd" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - pressure" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - sockstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - stat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - textfile" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - time" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - timex" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - uname" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - vmstat" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - xfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg=" - zfs" source="node_exporter.go:104"\ntime="2020-05-18T22:22:57Z" level=info msg="Listening on 127.0.0.1:9100" source="node_exporter.go:170"\n
May 18 22:38:42.013 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-251-4.us-west-1.compute.internal node/ip-10-0-251-4.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): ion refused\nE0518 22:38:41.434554       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/ingress.operator.openshift.io/v1/dnsrecords?allowWatchBookmarks=true&resourceVersion=27764&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:38:41.435844       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:6443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=30945&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:38:41.436991       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/monitoring.coreos.com/v1/thanosrulers?allowWatchBookmarks=true&resourceVersion=22605&timeout=9m15s&timeoutSeconds=555&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0518 22:38:41.525656       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0518 22:38:41.525766       1 controllermanager.go:291] leaderelection lost\nI0518 22:38:41.540019       1 attach_detach_controller.go:374] Shutting down attach detach controller\nI0518 22:38:41.540037       1 endpointslice_controller.go:229] Shutting down endpoint slice controller\nI0518 22:38:41.540043       1 node_lifecycle_controller.go:593] Shutting down node controller\nI0518 22:38:41.540047       1 replica_set.go:193] Shutting down replicaset controller\nI0518 22:38:41.540070       1 endpoints_controller.go:199] Shutting down endpoint controller\nI0518 22:38:41.540078       1 gc_controller.go:100] Shutting down GC controller\nI0518 22:38:41.540094       1 pvc_protection_controller.go:113] Shutting down PVC protection controller\nI0518 22:38:41.540102       1 horizontal.go:180] Shutting down HPA controller\n
May 18 22:38:58.120 E ns/openshift-controller-manager pod/controller-manager-q4bgh node/ip-10-0-251-4.us-west-1.compute.internal container/controller-manager container exited with code 137 (Error): I0518 22:37:25.240368       1 controller_manager.go:39] Starting controllers on 0.0.0.0:8443 (554623c)\nI0518 22:37:25.242668       1 controller_manager.go:50] DeploymentConfig controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable@sha256:6c9a53a8fd544809f8d2e563d80c87cd6ae35105ec6ae7785ba718d56db14655"\nI0518 22:37:25.242688       1 controller_manager.go:56] Build controller using images from "registry.svc.ci.openshift.org/ci-op-d2zn5gn9/stable@sha256:579be1a4b551c32690f221641c5f4c18a54022e4571a45055696b3bada85fd1a"\nI0518 22:37:25.242786       1 standalone_apiserver.go:98] Started health checks at 0.0.0.0:8443\nI0518 22:37:25.242842       1 leaderelection.go:242] attempting to acquire leader lease  openshift-controller-manager/openshift-master-controllers...\n
May 18 22:38:58.813 E ns/openshift-marketplace pod/certified-operators-c9689c974-zjk5j node/ip-10-0-233-6.us-west-1.compute.internal container/certified-operators container exited with code 2 (Error): 
May 18 22:39:15.725 E ns/openshift-console pod/console-84bb769d8f-kbkns node/ip-10-0-169-128.us-west-1.compute.internal container/console container exited with code 2 (Error): 18T22:24:40Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:24:50Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:00Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:10Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:20Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:30Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:40Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:51Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:26:01Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:26:11Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:26:21Z cmd/main: Binding to [::]:8443...\n2020-05-18T22:26:21Z cmd/main: using TLS\n
May 18 22:39:32.172 E ns/openshift-console pod/console-84bb769d8f-9npfs node/ip-10-0-251-4.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-05-18T22:24:23Z cmd/main: cookies are secure!\n2020-05-18T22:24:23Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:24:33Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:24:43Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:24:53Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:03Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:13Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:23Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:33Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:43Z auth: error contacting auth provider (retrying in 10s): discovery through endpoint https://kubernetes.default.svc/.well-known/oauth-authorization-server failed: 404 Not Found\n2020-05-18T22:25:53Z cmd/main: Binding to [::]:8443...\n2020-05-18T22:25:53Z cmd/main: using TLS\n
May 18 22:39:56.963 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0518 22:39:55.265920       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:39:55.265938       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:39:55.265963       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:39:55.265976       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:39:55.265988       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:39:55.265999       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:39:55.266020       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:39:55.266035       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:39:55.266050       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:39:55.266063       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:39:55.266076       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:39:55.266088       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:39:55.266101       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:39:55.266114       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:39:55.266159       1 server.go:681] external host was not specified, using 10.0.169.128\nI0518 22:39:55.266467       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:39:55.266895       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:40:12.340 E ns/openshift-sdn pod/sdn-controller-snvrs node/ip-10-0-251-4.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0518 22:13:51.272932       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0518 22:16:20.905703       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: etcdserver: request timed out\nE0518 22:18:12.344940       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
May 18 22:40:15.044 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0518 22:40:14.238245       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:40:14.238260       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:40:14.238270       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:40:14.238281       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:40:14.238290       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:40:14.238299       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:40:14.238316       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:40:14.238337       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:40:14.238348       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:40:14.238360       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:40:14.238371       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:40:14.238381       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:40:14.238392       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:40:14.238402       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:40:14.238436       1 server.go:681] external host was not specified, using 10.0.169.128\nI0518 22:40:14.238626       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:40:14.238886       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:40:18.019 E ns/openshift-sdn pod/sdn-xdzrs node/ip-10-0-233-6.us-west-1.compute.internal container/sdn container exited with code 255 (Error): 07 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.150.58:6443 10.0.251.4:6443]\nI0518 22:39:52.037479    2307 roundrobin.go:217] Delete endpoint 10.0.169.128:6443 for service "openshift-kube-apiserver/apiserver:https"\nI0518 22:39:52.083420    2307 roundrobin.go:267] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [10.0.150.58:6443 10.0.251.4:6443]\nI0518 22:39:52.083446    2307 roundrobin.go:217] Delete endpoint 10.0.169.128:6443 for service "default/kubernetes:https"\nI0518 22:39:52.175373    2307 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:39:52.175814    2307 proxier.go:349] userspace syncProxyRules took 27.884975ms\nI0518 22:39:52.311841    2307 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:39:52.312371    2307 proxier.go:349] userspace syncProxyRules took 28.369698ms\nI0518 22:40:09.572708    2307 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.4:6443 10.130.0.5:6443]\nI0518 22:40:09.572766    2307 roundrobin.go:217] Delete endpoint 10.128.0.12:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0518 22:40:09.572792    2307 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.4:8443 10.130.0.5:8443]\nI0518 22:40:09.572807    2307 roundrobin.go:217] Delete endpoint 10.128.0.12:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0518 22:40:09.716578    2307 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:40:09.717075    2307 proxier.go:349] userspace syncProxyRules took 28.941385ms\nI0518 22:40:17.640365    2307 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0518 22:40:17.640410    2307 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
May 18 22:40:27.219 E ns/openshift-sdn pod/sdn-controller-rzv8r node/ip-10-0-150-58.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): I0518 22:13:50.937794       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0518 22:18:12.345041       1 leaderelection.go:320] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: unexpected EOF\n
May 18 22:40:33.040 E ns/openshift-sdn pod/sdn-controller-cbffs node/ip-10-0-169-128.us-west-1.compute.internal container/sdn-controller container exited with code 2 (Error): watcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:25:22.037738       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:29:07.496832       1 vnids.go:116] Allocated netid 10342640 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-7518"\nI0518 22:29:07.511140       1 vnids.go:116] Allocated netid 6334946 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-5828"\nI0518 22:29:07.529048       1 vnids.go:116] Allocated netid 12946554 for namespace "e2e-frontend-ingress-available-2191"\nI0518 22:29:07.543561       1 vnids.go:116] Allocated netid 3908995 for namespace "e2e-check-for-critical-alerts-8693"\nI0518 22:29:07.567213       1 vnids.go:116] Allocated netid 14355933 for namespace "e2e-kubernetes-api-available-530"\nI0518 22:29:07.588760       1 vnids.go:116] Allocated netid 15817172 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-5471"\nI0518 22:29:07.632298       1 vnids.go:116] Allocated netid 12126874 for namespace "e2e-k8s-sig-apps-deployment-upgrade-8050"\nI0518 22:29:07.690869       1 vnids.go:116] Allocated netid 5783780 for namespace "e2e-k8s-sig-apps-job-upgrade-4672"\nI0518 22:29:07.862836       1 vnids.go:116] Allocated netid 11964980 for namespace "e2e-k8s-service-lb-available-9382"\nI0518 22:29:07.925904       1 vnids.go:116] Allocated netid 9706341 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-8674"\nI0518 22:29:08.052726       1 vnids.go:116] Allocated netid 12266183 for namespace "e2e-openshift-api-available-583"\nI0518 22:38:31.137026       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:38:31.137052       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:38:31.137071       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:38:31.137053       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
May 18 22:40:40.666 E ns/openshift-multus pod/multus-tdhf2 node/ip-10-0-251-4.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
May 18 22:40:46.202 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/kube-apiserver container exited with code 1 (Error): PRanger"\nI0518 22:40:45.306029       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0518 22:40:45.306045       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0518 22:40:45.306055       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0518 22:40:45.306065       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0518 22:40:45.306074       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0518 22:40:45.306083       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0518 22:40:45.306100       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0518 22:40:45.306112       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0518 22:40:45.306122       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0518 22:40:45.306134       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0518 22:40:45.306145       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0518 22:40:45.306157       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0518 22:40:45.306167       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0518 22:40:45.306177       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0518 22:40:45.306211       1 server.go:681] external host was not specified, using 10.0.169.128\nI0518 22:40:45.306369       1 server.go:724] Initializing cache sizes based on 0MB limit\nI0518 22:40:45.306725       1 server.go:193] Version: v1.18.2\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
May 18 22:41:14.325 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/kube-controller-manager container exited with code 255 (Error): utSeconds=577&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.335676       1 reflector.go:382] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.RangeAllocation: Get https://localhost:6443/apis/security.openshift.io/v1/rangeallocations?allowWatchBookmarks=true&resourceVersion=31115&timeout=7m13s&timeoutSeconds=433&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.336919       1 reflector.go:382] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.Build: Get https://localhost:6443/apis/build.openshift.io/v1/builds?allowWatchBookmarks=true&resourceVersion=31115&timeout=8m11s&timeoutSeconds=491&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.338082       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/ingresscontrollers?allowWatchBookmarks=true&resourceVersion=29041&timeout=7m32s&timeoutSeconds=452&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.339151       1 reflector.go:382] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operators.coreos.com/v1/operatorsources?allowWatchBookmarks=true&resourceVersion=29625&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0518 22:41:13.598084       1 leaderelection.go:277] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0518 22:41:13.598179       1 controllermanager.go:291] leaderelection lost\nI0518 22:41:13.610964       1 certificate_controller.go:131] Shutting down certificate controller "csrsigning"\nI0518 22:41:13.610998       1 expand_controller.go:331] Shutting down expand controller\nI0518 22:41:13.611164       1 gc_controller.go:100] Shutting down GC controller\n
May 18 22:41:14.365 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-169-128.us-west-1.compute.internal node/ip-10-0-169-128.us-west-1.compute.internal container/kube-scheduler container exited with code 255 (Error): : Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=25483&timeout=7m53s&timeoutSeconds=473&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.177566       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=25478&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.179099       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:6443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=25483&timeout=7m51s&timeoutSeconds=471&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.180220       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=32641&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.181286       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=25483&timeout=9m45s&timeoutSeconds=585&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0518 22:41:13.184047       1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28963&timeout=9m24s&timeoutSeconds=564&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0518 22:41:13.631005       1 leaderelection.go:277] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0518 22:41:13.631037       1 server.go:244] leaderelection lost\n
May 18 22:41:32.457 E ns/openshift-multus pod/multus-vtg5v node/ip-10-0-189-73.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
May 18 22:41:32.802 E ns/openshift-sdn pod/sdn-mdgrp node/ip-10-0-251-4.us-west-1.compute.internal container/sdn container exited with code 255 (Error): .go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:41:02.075724   84529 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0518 22:41:22.123846   84529 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-controller-manager/kube-controller-manager:https to [10.0.150.58:10257 10.0.251.4:10257]\nI0518 22:41:22.123988   84529 roundrobin.go:217] Delete endpoint 10.0.169.128:10257 for service "openshift-kube-controller-manager/kube-controller-manager:https"\nI0518 22:41:22.193706   84529 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.150.58:10259 10.0.251.4:10259]\nI0518 22:41:22.193751   84529 roundrobin.go:217] Delete endpoint 10.0.169.128:10259 for service "openshift-kube-scheduler/scheduler:https"\nI0518 22:41:22.564404   84529 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:22.565468   84529 proxier.go:349] userspace syncProxyRules took 100.049265ms\nI0518 22:41:22.995402   84529 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:22.996222   84529 proxier.go:349] userspace syncProxyRules took 92.220172ms\nI0518 22:41:25.999163   84529 healthcheck.go:92] SDN healthcheck unable to reconnect to OVS server: dial unix /var/run/openvswitch/db.sock: connect: connection refused\nI0518 22:41:31.505375   84529 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nE0518 22:41:31.505411   84529 pod.go:233] Error updating OVS multicast flows for VNID 7186714: exit status 1\nI0518 22:41:31.509427   84529 ovs.go:169] Error executing ovs-vsctl: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\nI0518 22:41:31.512422   84529 pod.go:540] CNI_DEL openshift-multus/multus-admission-controller-fnc4t\nF0518 22:41:32.479159   84529 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
May 18 22:41:55.465 E ns/openshift-sdn pod/sdn-4vgsl node/ip-10-0-169-128.us-west-1.compute.internal container/sdn container exited with code 255 (Error): 12 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-apiserver/apiserver:https to [10.0.150.58:6443 10.0.169.128:6443 10.0.251.4:6443]\nI0518 22:41:44.288239   88912 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:44.288965   88912 proxier.go:349] userspace syncProxyRules took 34.551006ms\nI0518 22:41:47.813122   88912 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.129.0.4:6443 10.130.0.71:6443]\nI0518 22:41:47.813168   88912 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.129.0.4:8443 10.130.0.71:8443]\nI0518 22:41:47.834025   88912 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.130.0.71:6443]\nI0518 22:41:47.834059   88912 roundrobin.go:217] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0518 22:41:47.834073   88912 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.130.0.71:8443]\nI0518 22:41:47.834080   88912 roundrobin.go:217] Delete endpoint 10.129.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0518 22:41:48.016208   88912 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:48.016750   88912 proxier.go:349] userspace syncProxyRules took 39.840099ms\nI0518 22:41:48.177829   88912 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:48.178578   88912 proxier.go:349] userspace syncProxyRules took 35.908479ms\nI0518 22:41:54.719928   88912 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0518 22:41:54.719973   88912 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
May 18 22:42:14.766 E ns/openshift-sdn pod/sdn-vg4ch node/ip-10-0-150-58.us-west-1.compute.internal container/sdn container exited with code 255 (Error):  roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.129.0.4:6443 10.130.0.71:6443]\nI0518 22:41:47.815160   87391 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.129.0.4:8443 10.130.0.71:8443]\nI0518 22:41:47.835963   87391 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.130.0.71:6443]\nI0518 22:41:47.836003   87391 roundrobin.go:217] Delete endpoint 10.129.0.4:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0518 22:41:47.836022   87391 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.130.0.71:8443]\nI0518 22:41:47.836034   87391 roundrobin.go:217] Delete endpoint 10.129.0.4:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0518 22:41:48.034867   87391 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:48.035619   87391 proxier.go:349] userspace syncProxyRules took 38.978046ms\nI0518 22:41:48.218362   87391 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:48.219447   87391 proxier.go:349] userspace syncProxyRules took 37.33874ms\nI0518 22:42:07.248291   87391 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.150.58:10259 10.0.169.128:10259 10.0.251.4:10259]\nI0518 22:42:07.423838   87391 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:42:07.424806   87391 proxier.go:349] userspace syncProxyRules took 33.105409ms\nI0518 22:42:13.839897   87391 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0518 22:42:13.839945   87391 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
May 18 22:42:18.716 E ns/openshift-multus pod/multus-admission-controller-m7qmn node/ip-10-0-150-58.us-west-1.compute.internal container/multus-admission-controller container exited with code 137 (Error): 
May 18 22:42:43.451 E ns/openshift-sdn pod/sdn-4r8sb node/ip-10-0-154-99.us-west-1.compute.internal container/sdn container exited with code 255 (Error): \nI0518 22:41:48.554051   60239 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:41:48.554757   60239 proxier.go:349] userspace syncProxyRules took 86.44333ms\nI0518 22:41:48.616358   60239 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:http" (:31435/tcp)\nI0518 22:41:48.616691   60239 proxier.go:1656] Opened local port "nodePort for e2e-k8s-service-lb-available-9382/service-test:" (:30829/tcp)\nI0518 22:41:48.616935   60239 proxier.go:1656] Opened local port "nodePort for openshift-ingress/router-default:https" (:32533/tcp)\nI0518 22:41:48.659108   60239 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30112\nI0518 22:41:48.670122   60239 proxy.go:311] openshift-sdn proxy services and endpoints initialized\nI0518 22:41:48.670157   60239 cmd.go:172] openshift-sdn network plugin registering startup\nI0518 22:41:48.670287   60239 cmd.go:176] openshift-sdn network plugin ready\nI0518 22:42:07.246741   60239 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-kube-scheduler/scheduler:https to [10.0.150.58:10259 10.0.169.128:10259 10.0.251.4:10259]\nI0518 22:42:07.398481   60239 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:42:07.399115   60239 proxier.go:349] userspace syncProxyRules took 37.184068ms\nI0518 22:42:32.832926   60239 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.72:6443 10.129.0.70:6443 10.130.0.71:6443]\nI0518 22:42:32.832963   60239 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.72:8443 10.129.0.70:8443 10.130.0.71:8443]\nI0518 22:42:32.997849   60239 proxier.go:370] userspace proxy: processing 0 service events\nI0518 22:42:32.998619   60239 proxier.go:349] userspace syncProxyRules took 43.28325ms\nF0518 22:42:43.086680   60239 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
May 18 22:43:25.576 E ns/openshift-multus pod/multus-8hh8z node/ip-10-0-154-99.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
May 18 22:44:14.190 E ns/openshift-multus pod/multus-hzzt5 node/ip-10-0-233-6.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
May 18 22:45:10.223 E ns/openshift-multus pod/multus-kclqj node/ip-10-0-169-128.us-west-1.compute.internal container/kube-multus container exited with code 137 (Error): 
May 18 22:48:21.837 E ns/openshift-machine-config-operator pod/machine-config-daemon-7zcsl node/ip-10-0-233-6.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 18 22:48:36.435 E ns/openshift-machine-config-operator pod/machine-config-daemon-p45dg node/ip-10-0-251-4.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 18 22:48:45.992 E ns/openshift-machine-config-operator pod/machine-config-daemon-wv2qj node/ip-10-0-169-128.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 18 22:48:57.479 E ns/openshift-machine-config-operator pod/machine-config-daemon-z4q2f node/ip-10-0-189-73.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 18 22:49:03.295 E ns/openshift-machine-config-operator pod/machine-config-daemon-6ml8c node/ip-10-0-154-99.us-west-1.compute.internal container/oauth-proxy container exited with code 143 (Error): 
May 18 22:49:13.579 E ns/openshift-machine-config-operator pod/machine-config-controller-59bb856b68-drcsp node/ip-10-0-251-4.us-west-1.compute.internal container/machine-config-controller container exited with code 2 (Error): retrieving resource lock openshift-machine-config-operator/machine-config-controller: Get https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller: unexpected EOF\nI0518 22:23:51.094907       1 node_controller.go:453] Pool worker: node ip-10-0-189-73.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:23:51.095366       1 node_controller.go:453] Pool worker: node ip-10-0-189-73.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:23:51.095435       1 node_controller.go:453] Pool worker: node ip-10-0-189-73.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0518 22:24:12.306893       1 node_controller.go:453] Pool worker: node ip-10-0-154-99.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:24:12.306921       1 node_controller.go:453] Pool worker: node ip-10-0-154-99.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:24:12.306934       1 node_controller.go:453] Pool worker: node ip-10-0-154-99.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\nI0518 22:24:18.354023       1 node_controller.go:453] Pool worker: node ip-10-0-233-6.us-west-1.compute.internal changed machineconfiguration.openshift.io/currentConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:24:18.354148       1 node_controller.go:453] Pool worker: node ip-10-0-233-6.us-west-1.compute.internal changed machineconfiguration.openshift.io/desiredConfig = rendered-worker-10065e57801575f9ab861134136162b3\nI0518 22:24:18.354192       1 node_controller.go:453] Pool worker: node ip-10-0-233-6.us-west-1.compute.internal changed machineconfiguration.openshift.io/state = Done\n
May 18 22:51:12.014 E ns/openshift-machine-config-operator pod/machine-config-server-5p6zc node/ip-10-0-251-4.us-west-1.compute.internal container/machine-config-server container exited with code 2 (Error): I0518 22:15:38.883771       1 start.go:38] Version: machine-config-daemon-4.5.0-202005142357-4-g461e3688-dirty (461e36888f63ee7f592207f022dbca9248b9f984)\nI0518 22:15:38.884723       1 api.go:56] Launching server on :22624\nI0518 22:15:38.884779       1 api.go:56] Launching server on :22623\n
May 18 22:51:30.886 E ns/openshift-machine-config-operator pod/machine-config-operator-8446c5b55b-hs8zk node/ip-10-0-169-128.us-west-1.compute.internal container/machine-config-operator container exited with code 2 (Error): :"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"bd2e029c-f481-4806-ba94-5cb7c121de61", ResourceVersion:"36235", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725436878, loc:(*time.Location)(0x25203c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-8446c5b55b-hs8zk_d2cef9eb-c022-40a5-b500-efa5cde8de8b\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-05-18T22:48:11Z\",\"renewTime\":\"2020-05-18T22:48:11Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"machine-config-operator", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000361ce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000361d00)}}}, Immutable:(*bool)(nil), Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-8446c5b55b-hs8zk_d2cef9eb-c022-40a5-b500-efa5cde8de8b became leader'\nI0518 22:48:11.200671       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0518 22:48:11.748050       1 operator.go:265] Starting MachineConfigOperator\nI0518 22:48:11.753989       1 event.go:278] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"7df96e47-9251-4e5b-9dd0-09e7b9e6b913", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-05-18-220004}] to [{operator 0.0.1-2020-05-18-220215}]\n
May 18 22:51:49.936 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-73.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:51:43.478Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:51:43.482Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:51:43.483Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:51:43.485Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:51:43.485Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:51:43.487Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:51:43.487Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:51:49.969 E ns/openshift-console pod/console-b98fc97f7-xwrht node/ip-10-0-169-128.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-05-18T22:38:55Z cmd/main: cookies are secure!\n2020-05-18T22:38:55Z cmd/main: Binding to [::]:8443...\n2020-05-18T22:38:55Z cmd/main: using TLS\n
May 18 22:51:52.773 E ns/e2e-k8s-sig-apps-job-upgrade-4672 pod/foo-8t74w node/ip-10-0-154-99.us-west-1.compute.internal container/c container exited with code 137 (Error): 
May 18 22:53:23.577 E ns/openshift-machine-config-operator pod/machine-config-daemon-r6qmh node/ip-10-0-169-128.us-west-1.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 18 22:53:26.353 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
May 18 22:53:39.022 E ns/openshift-insights pod/insights-operator-5667b4d5f5-x86h8 node/ip-10-0-251-4.us-west-1.compute.internal container/operator container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-insights_insights-operator-5667b4d5f5-x86h8_ed591a88-b372-4d8c-8a27-ca7a61d555c7/operator/0.log": lstat /var/log/pods/openshift-insights_insights-operator-5667b4d5f5-x86h8_ed591a88-b372-4d8c-8a27-ca7a61d555c7/operator/0.log: no such file or directory
May 18 22:53:55.777 E ns/openshift-machine-config-operator pod/machine-config-daemon-4kq8s node/ip-10-0-154-99.us-west-1.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 18 22:53:58.447 E clusteroperator/etcd changed Degraded to True: EtcdMembers_UnhealthyMembers: EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-251-4.us-west-1.compute.internal is unhealthy
May 18 22:54:01.502 E ns/openshift-console pod/console-b98fc97f7-vp5qd node/ip-10-0-251-4.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-05-18T22:51:31Z cmd/main: cookies are secure!\n2020-05-18T22:51:31Z cmd/main: Binding to [::]:8443...\n2020-05-18T22:51:31Z cmd/main: using TLS\n
May 18 22:54:14.450 E kube-apiserver Kube API started failing: Get https://api.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: unexpected EOF
May 18 22:54:14.627 E kube-apiserver failed contacting the API: Get https://api.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=41146&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 13.57.126.81:6443: connect: connection refused
May 18 22:55:05.433 E ns/openshift-marketplace pod/redhat-operators-7465689674-gckg5 node/ip-10-0-189-73.us-west-1.compute.internal container/redhat-operators container exited with code 2 (Error): 
May 18 22:55:34.486 E ns/openshift-marketplace pod/certified-operators-767df4985b-b6pv5 node/ip-10-0-189-73.us-west-1.compute.internal container/certified-operators container exited with code 2 (Error): 
May 18 22:55:38.883 E ns/openshift-machine-config-operator pod/machine-config-daemon-llw6c node/ip-10-0-251-4.us-west-1.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 18 22:55:56.550 E ns/openshift-marketplace pod/community-operators-b8457bcc-bjgdz node/ip-10-0-189-73.us-west-1.compute.internal container/community-operators container exited with code 2 (Error): 
May 18 22:55:58.823 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-73.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:51:43.478Z caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:51:43.482Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:51:43.483Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:51:43.484Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:51:43.485Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:51:43.485Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:51:43.487Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:51:43.487Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:55:58.823 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-73.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/18 22:51:48 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
May 18 22:55:58.823 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-73.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): Error on reading termination message from logs: failed to try resolving symlinks in path "/var/log/pods/openshift-monitoring_prometheus-k8s-0_2c26b241-a065-4ff7-a24b-18ea6eb674a1/prometheus-proxy/0.log": lstat /var/log/pods/openshift-monitoring_prometheus-k8s-0_2c26b241-a065-4ff7-a24b-18ea6eb674a1/prometheus-proxy/0.log: no such file or directory
May 18 22:55:58.823 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-189-73.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-18T22:51:48.743840825Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-18T22:51:48.74620573Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-18T22:51:53.948919806Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:51:53.949057207Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
May 18 22:55:59.960 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-58cf7886bd-7j9cg node/ip-10-0-189-73.us-west-1.compute.internal container/snapshot-controller container exited with code 2 (Error): 
May 18 22:56:00.189 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-189-73.us-west-1.compute.internal container/config-reloader container exited with code 2 (Error): 2020/05/18 22:51:38 Watching directory: "/etc/alertmanager/config"\n
May 18 22:56:00.189 E ns/openshift-monitoring pod/alertmanager-main-2 node/ip-10-0-189-73.us-west-1.compute.internal container/alertmanager-proxy container exited with code 2 (Error): 2020/05/18 22:51:38 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/18 22:51:38 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/18 22:51:38 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:51:38 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/05/18 22:51:38 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:51:38 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/05/18 22:51:38 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:51:38 http.go:107: HTTPS: listening on [::]:9095\nI0518 22:51:38.381543       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
May 18 22:56:02.125 E ns/openshift-cluster-machine-approver pod/machine-approver-64689f498-xr9lr node/ip-10-0-150-58.us-west-1.compute.internal container/machine-approver-controller container exited with code 2 (Error): ] Starting reflector *v1beta1.CertificateSigningRequest (0s) from github.com/openshift/cluster-machine-approver/main.go:239\nI0518 22:37:02.810949       1 status.go:96] Starting cluster operator status controller\nI0518 22:37:02.811212       1 reflector.go:175] Starting reflector *v1.ClusterOperator (0s) from github.com/openshift/cluster-machine-approver/status.go:98\nI0518 22:37:02.909603       1 main.go:147] CSR csr-p67k8 added\nI0518 22:37:02.909661       1 main.go:150] CSR csr-p67k8 is already approved\nI0518 22:37:02.909682       1 main.go:147] CSR csr-p6k24 added\nI0518 22:37:02.909693       1 main.go:150] CSR csr-p6k24 is already approved\nI0518 22:37:02.909707       1 main.go:147] CSR csr-r6jkr added\nI0518 22:37:02.909717       1 main.go:150] CSR csr-r6jkr is already approved\nI0518 22:37:02.909732       1 main.go:147] CSR csr-rskmc added\nI0518 22:37:02.909742       1 main.go:150] CSR csr-rskmc is already approved\nI0518 22:37:02.909755       1 main.go:147] CSR csr-57l5c added\nI0518 22:37:02.909766       1 main.go:150] CSR csr-57l5c is already approved\nI0518 22:37:02.909779       1 main.go:147] CSR csr-9wp7j added\nI0518 22:37:02.909788       1 main.go:150] CSR csr-9wp7j is already approved\nI0518 22:37:02.909803       1 main.go:147] CSR csr-cqwgh added\nI0518 22:37:02.909813       1 main.go:150] CSR csr-cqwgh is already approved\nI0518 22:37:02.909826       1 main.go:147] CSR csr-h6hjf added\nI0518 22:37:02.909836       1 main.go:150] CSR csr-h6hjf is already approved\nI0518 22:37:02.909851       1 main.go:147] CSR csr-vb4l7 added\nI0518 22:37:02.909862       1 main.go:150] CSR csr-vb4l7 is already approved\nI0518 22:37:02.909875       1 main.go:147] CSR csr-wbjgh added\nI0518 22:37:02.909885       1 main.go:150] CSR csr-wbjgh is already approved\nI0518 22:37:02.909898       1 main.go:147] CSR csr-mgwdq added\nI0518 22:37:02.909907       1 main.go:150] CSR csr-mgwdq is already approved\nI0518 22:37:02.909920       1 main.go:147] CSR csr-slggw added\nI0518 22:37:02.909930       1 main.go:150] CSR csr-slggw is already approved\n
May 18 22:56:09.434 E ns/openshift-machine-api pod/machine-api-controllers-69b88f6bcc-p4v8d node/ip-10-0-150-58.us-west-1.compute.internal container/machineset-controller container exited with code 1 (Error): 
May 18 22:56:18.355 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-154-99.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:56:10.430Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:56:10.435Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:56:10.435Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:56:10.436Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:56:10.436Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:56:10.437Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:56:10.437Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:56:10.437Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:56:10.439Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:56:10.439Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:56:26.554 E ns/openshift-console pod/console-b98fc97f7-gk875 node/ip-10-0-150-58.us-west-1.compute.internal container/console container exited with code 2 (Error): 2020-05-18T22:38:26Z cmd/main: cookies are secure!\n2020-05-18T22:38:26Z cmd/main: Binding to [::]:8443...\n2020-05-18T22:38:26Z cmd/main: using TLS\n
May 18 22:56:44.153 E kube-apiserver Kube API started failing: Get https://api.ci-op-d2zn5gn9-f83f1.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/kube-system?timeout=5s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
May 18 22:58:23.495 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
May 18 22:58:29.264 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
May 18 22:58:31.999 E ns/openshift-machine-config-operator pod/machine-config-daemon-86s4k node/ip-10-0-189-73.us-west-1.compute.internal container/oauth-proxy container exited with code 1 (Error): 
May 18 22:58:42.712 E ns/openshift-marketplace pod/community-operators-6bc977dff8-pdnns node/ip-10-0-233-6.us-west-1.compute.internal container/community-operators container exited with code 2 (Error): 
May 18 22:58:42.762 E ns/openshift-marketplace pod/redhat-marketplace-567fdd7bf7-hv7p8 node/ip-10-0-233-6.us-west-1.compute.internal container/redhat-marketplace container exited with code 2 (Error): 
May 18 22:58:43.863 E ns/openshift-monitoring pod/openshift-state-metrics-7689cf497d-4rk9h node/ip-10-0-233-6.us-west-1.compute.internal container/openshift-state-metrics container exited with code 2 (Error): 
May 18 22:58:44.182 E ns/openshift-monitoring pod/thanos-querier-7bd9c84db5-7dfrc node/ip-10-0-233-6.us-west-1.compute.internal container/oauth-proxy container exited with code 2 (Error): vider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:51:25 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/18 22:51:25 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:51:25 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/05/18 22:51:25 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:51:25 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/18 22:51:25 http.go:107: HTTPS: listening on [::]:9091\nI0518 22:51:25.108051       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/18 22:51:56 oauthproxy.go:774: basicauth: 10.129.0.43:54536 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:52:56 oauthproxy.go:774: basicauth: 10.129.0.43:39014 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:55:57 oauthproxy.go:774: basicauth: 10.129.0.43:38174 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:56:14 oauthproxy.go:774: basicauth: 10.130.0.18:48254 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:57:43 oauthproxy.go:774: basicauth: 10.130.0.18:53872 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:57:43 oauthproxy.go:774: basicauth: 10.130.0.18:53872 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:58:14 oauthproxy.go:774: basicauth: 10.130.0.18:55344 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:58:14 oauthproxy.go:774: basicauth: 10.130.0.18:55344 Authorization header does not start with 'Basic', skipping basic authentication\n
May 18 22:58:44.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/rules-configmap-reloader container exited with code 2 (Error): 2020/05/18 22:37:55 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
May 18 22:58:44.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/prometheus-proxy container exited with code 2 (Error): 2020/05/18 22:37:58 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:37:58 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/05/18 22:37:58 provider.go:312: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/05/18 22:37:58 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/05/18 22:37:58 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/05/18 22:37:58 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/05/18 22:37:58 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/05/18 22:37:58 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/05/18 22:37:58 http.go:107: HTTPS: listening on [::]:9091\nI0518 22:37:58.616669       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/05/18 22:51:37 oauthproxy.go:774: basicauth: 10.130.0.75:33248 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:53:58 oauthproxy.go:774: basicauth: 10.128.0.20:43234 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:55:41 oauthproxy.go:774: basicauth: 10.128.2.22:60958 Authorization header does not start with 'Basic', skipping basic authentication\n2020/05/18 22:56:06 oauthproxy.go:774: basicauth: 10.131.0.11:50812 Authorization header does not start with 'Basic', skipping basic authentication\n
May 18 22:58:44.310 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-233-6.us-west-1.compute.internal container/prometheus-config-reloader container exited with code 2 (Error): ts=2020-05-18T22:37:53.600070735Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=error ts=2020-05-18T22:37:53.602236232Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=error ts=2020-05-18T22:37:58.607994725Z caller=runutil.go:98 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-05-18T22:38:03.760625922Z caller=reloader.go:289 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\nlevel=info ts=2020-05-18T22:38:03.760732877Z caller=reloader.go:157 msg="started watching config file and non-recursively rule dirs for changes" cfg=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml dirs=\n
May 18 22:58:57.277 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-189-73.us-west-1.compute.internal container/prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-05-18T22:58:55.047Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-05-18T22:58:55.057Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-05-18T22:58:55.058Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-05-18T22:58:55.058Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-05-18T22:58:55.058Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-05-18T22:58:55.058Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-05-18T22:58:55.059Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-05-18T22:58:55.059Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-05-18T22:58:55.059Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-05-18T22:58:55.060Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-05-18T22:58:55.060Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-05-18
May 18 22:59:26.142 E ns/e2e-k8s-service-lb-available-9382 pod/service-test-4fbx2 node/ip-10-0-233-6.us-west-1.compute.internal container/netexec container exited with code 2 (Error): 
May 18 23:01:13.092 E ns/openshift-machine-config-operator pod/machine-config-daemon-49w6q node/ip-10-0-233-6.us-west-1.compute.internal container/oauth-proxy container exited with code 1 (Error):
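The "Kube API started failing" / "started responding" transitions recorded above are emitted by the openshift-tests disruption monitor, which repeatedly issues GETs against the cluster API during the upgrade and reports each availability transition. The sketch below is only an illustration of that style of reachability probe, not the monitor's actual implementation: the API URL is a placeholder, TLS verification is skipped to keep it self-contained, and any HTTP response (even 401/403) is treated as "reachable", matching the events above where only transport failures (unexpected EOF, context deadline exceeded) count as disruption.

package main

// Minimal availability-probe sketch (assumed behavior, not the
// openshift-tests monitor code). Polls the same endpoint named in the
// disruption events (GET /api/v1/namespaces/kube-system) and logs
// transitions between reachable and unreachable.

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder API endpoint; substitute the cluster's real API URL.
	const url = "https://api.example.invalid:6443/api/v1/namespaces/kube-system?timeout=5s"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real monitor trusts the cluster CA bundle; skipping
		// verification here only keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	available := true
	for range time.Tick(time.Second) {
		resp, err := client.Get(url)
		if resp != nil {
			resp.Body.Close()
		}
		reachable := err == nil // transport error or timeout => disruption
		switch {
		case reachable && !available:
			fmt.Printf("%s I kube-apiserver Kube API started responding to GET requests\n",
				time.Now().Format(time.StampMilli))
			available = true
		case !reachable && available:
			fmt.Printf("%s E kube-apiserver Kube API started failing: %v\n",
				time.Now().Format(time.StampMilli), err)
			available = false
		}
	}
}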