Result: SUCCESS
Tests: 1 failed / 24 succeeded
Started: 2020-06-15 21:29
Elapsed: 2h29m
Work namespace: ci-op-04gnlsvw
Refs: release-4.4:de1a3573, 179:be33c7a0
pod: 3fc19cd1-af4f-11ea-acfe-0a580a8004de
repo: openshift/cluster-dns-operator
revision: 1

Test Failures


openshift-tests Monitor cluster while tests execute (44m35s)

go run hack/e2e.go -v -test --test_args='--ginkgo.focus=openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$'
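
The --ginkgo.focus value above is a regular expression (whitespace and hyphens escaped, anchored at the end of the test name), so only the failed monitor test is re-run. As an illustrative sanity check only, separate from the test harness itself, the same pattern can be exercised with Go's regexp package:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied verbatim from the reproduction command above.
	focus := regexp.MustCompile(`openshift\-tests\sMonitor\scluster\swhile\stests\sexecute$`)

	// Matches the failing test's full name, rejects unrelated suite names.
	fmt.Println(focus.MatchString("openshift-tests Monitor cluster while tests execute")) // true
	fmt.Println(focus.MatchString("openshift-tests Run unrelated suite"))                 // false
}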
170 error-level events were detected during this test run (see the triage sketch after the event list):

Jun 15 23:02:53.236 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=kube-scheduler container exited with code 255 (Error): n refused\nE0615 23:02:52.696145       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=21534&timeout=5m49s&timeoutSeconds=349&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:02:52.699545       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=17756&timeout=5m3s&timeoutSeconds=303&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:02:52.700638       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=20433&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:02:52.701673       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=17586&timeout=9m13s&timeoutSeconds=553&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:02:52.703121       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=22381&timeout=7m42s&timeoutSeconds=462&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0615 23:02:52.713555       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0615 23:02:52.713592       1 server.go:257] leaderelection lost\n
Jun 15 23:07:03.179 E ns/openshift-kube-apiserver-operator pod/kube-apiserver-operator-7f4fc5f6c9-vwzf6 node/ip-10-0-161-67.ec2.internal container=kube-apiserver-operator container exited with code 255 (Error): 17.879636       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da51e54e-720a-416d-bf87-c783cf187ee0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeCurrentRevisionChanged' Updated node "ip-10-0-185-148.ec2.internal" from revision 7 to 9 because static pod is ready\nI0615 23:03:17.904844       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da51e54e-720a-416d-bf87-c783cf187ee0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 9"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9"\nI0615 23:03:20.077480       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da51e54e-720a-416d-bf87-c783cf187ee0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/revision-status-9 -n openshift-kube-apiserver:\ncause by changes in data.status\nI0615 23:03:29.074185       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"da51e54e-720a-416d-bf87-c783cf187ee0", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'PodCreated' Created Pod/revision-pruner-9-ip-10-0-185-148.ec2.internal -n openshift-kube-apiserver because it was missing\nI0615 23:07:02.415529       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nF0615 23:07:02.416188       1 builder.go:209] server exited\n
Jun 15 23:07:17.238 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-5b9fd457c4-pbk8p node/ip-10-0-161-67.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): arting failed container=cluster-policy-controller pod=kube-controller-manager-ip-10-0-161-67.ec2.internal_openshift-kube-controller-manager(5fb0281c0d2f0708ca0b7fb56f0b3938)\"\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-161-67.ec2.internal pods/kube-controller-manager-ip-10-0-161-67.ec2.internal container=\"cluster-policy-controller\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0615 23:01:59.377785       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"4f4c9250-b824-43de-8c71-06fd643c3aea", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-161-67.ec2.internal pods/kube-controller-manager-ip-10-0-161-67.ec2.internal container=\"cluster-policy-controller\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0615 23:07:16.651684       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:07:16.652116       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0615 23:07:16.652151       1 builder.go:209] server exited\nI0615 23:07:16.674212       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0615 23:07:16.674249       1 base_controller.go:74] Shutting down NodeController ...\nI0615 23:07:16.674267       1 base_controller.go:74] Shutting down  ...\nI0615 23:07:16.674281       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0615 23:07:16.674297       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0615 23:07:16.668499       1 config_observer_controller.go:160] Shutting down ConfigObserver\nF0615 23:07:16.674636       1 leaderelection.go:67] leaderelection lost\n
Jun 15 23:07:22.264 E ns/openshift-kube-scheduler-operator pod/openshift-kube-scheduler-operator-545f7c475c-pwkvq node/ip-10-0-161-67.ec2.internal container=kube-scheduler-operator-container container exited with code 255 (Error): 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-185-148.ec2.internal pods/openshift-kube-scheduler-ip-10-0-185-148.ec2.internal container=\"kube-scheduler\" is not ready"\nI0615 23:03:48.156117       1 status_controller.go:176] clusteroperator/kube-scheduler diff {"status":{"conditions":[{"lastTransitionTime":"2020-06-15T22:53:09Z","message":"NodeControllerDegraded: All master nodes are ready","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-06-15T22:58:16Z","message":"NodeInstallerProgressing: 3 nodes are at revision 7","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2020-06-15T22:48:33Z","message":"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-06-15T22:46:25Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0615 23:03:48.164553       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-scheduler-operator", Name:"openshift-kube-scheduler-operator", UID:"befa0d4a-9d06-4a4b-88b6-e26d9379f0f8", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: nodes/ip-10-0-185-148.ec2.internal pods/openshift-kube-scheduler-ip-10-0-185-148.ec2.internal container=\"kube-scheduler\" is not ready" to "NodeControllerDegraded: All master nodes are ready"\nI0615 23:07:21.217222       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:07:21.217304       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nF0615 23:07:21.217321       1 builder.go:209] server exited\n
Jun 15 23:07:58.530 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-67.ec2.internal node/ip-10-0-161-67.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:07:56.865842       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:07:56.865852       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:07:56.865862       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:07:56.865872       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:07:56.865881       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:07:56.865911       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:07:56.865922       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:07:56.865935       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:07:56.865943       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:07:56.865953       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:07:56.865969       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:07:56.865981       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:07:56.865993       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:07:56.866004       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:07:56.866042       1 server.go:627] external host was not specified, using 10.0.161.67\nI0615 23:07:56.866212       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:07:56.866471       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:07:58.546 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-161-67.ec2.internal node/ip-10-0-161-67.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0615 23:07:57.784033       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0615 23:07:57.787059       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0615 23:07:57.790355       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0615 23:07:57.790475       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0615 23:07:57.792515       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 15 23:08:20.717 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-67.ec2.internal node/ip-10-0-161-67.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:08:20.176646       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:08:20.176655       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:08:20.176663       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:08:20.176671       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:08:20.176680       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:08:20.176688       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:08:20.176697       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:08:20.176705       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:08:20.176714       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:08:20.176723       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:08:20.176737       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:08:20.176750       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:08:20.176760       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:08:20.176771       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:08:20.176805       1 server.go:627] external host was not specified, using 10.0.161.67\nI0615 23:08:20.177036       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:08:20.177416       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:08:50.825 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-161-67.ec2.internal node/ip-10-0-161-67.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:08:50.091758       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:08:50.091767       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:08:50.091773       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:08:50.091779       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:08:50.091784       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:08:50.091789       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:08:50.091795       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:08:50.091800       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:08:50.091805       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:08:50.091811       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:08:50.091820       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:08:50.091826       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:08:50.091833       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:08:50.091839       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:08:50.091862       1 server.go:627] external host was not specified, using 10.0.161.67\nI0615 23:08:50.092039       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:08:50.092307       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:09:09.692 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0615 23:09:08.279835       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0615 23:09:08.281454       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0615 23:09:08.283661       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0615 23:09:08.284254       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 15 23:09:13.910 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-161-67.ec2.internal node/ip-10-0-161-67.ec2.internal container=kube-scheduler container exited with code 255 (Error): timeoutSeconds=553&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:09:13.240157       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=24809&timeout=6m51s&timeoutSeconds=411&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:09:13.241438       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:6443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=25223&timeout=8m32s&timeoutSeconds=512&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:09:13.242581       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=24809&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:09:13.243807       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:6443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=24809&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:09:13.244871       1 reflector.go:307] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: Get https://localhost:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=21183&timeout=9m43s&timeoutSeconds=583&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0615 23:09:13.490323       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0615 23:09:13.490371       1 server.go:257] leaderelection lost\n
Jun 15 23:09:27.770 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0615 23:09:27.083421       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0615 23:09:27.085429       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0615 23:09:27.087226       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0615 23:09:27.087342       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0615 23:09:27.087978       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 15 23:09:57.526 E ns/openshift-machine-api pod/machine-api-controllers-866c848465-n7mgl node/ip-10-0-232-91.ec2.internal container=controller-manager container exited with code 1 (Error): 
Jun 15 23:10:01.125 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-c6895d98c-9srtk node/ip-10-0-161-67.ec2.internal container=kube-storage-version-migrator-operator container exited with code 255 (Error): e":"Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available"},{"type":"Upgradeable","status":"Unknown","lastTransitionTime":"2020-06-15T22:46:24Z","reason":"NoData"}],"versions":[{"name":"operator","version":"0.0.1-2020-06-15-213103"}\n\nA: ],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nB: ,{"name":"kube-storage-version-migrator","version":""}],"relatedObjects":[{"group":"operator.openshift.io","resource":"kubestorageversionmigrators","name":"cluster"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator"},{"group":"","resource":"namespaces","name":"openshift-kube-storage-version-migrator-operator"}],"extension":null}\n\n\nI0615 22:56:13.151212       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b9e564db-6c95-4d5e-a28e-6dd4c72dd107", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0615 22:56:13.164368       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"b9e564db-6c95-4d5e-a28e-6dd4c72dd107", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Available changed from False to True ("")\nI0615 23:10:00.003739       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nF0615 23:10:00.003788       1 leaderelection.go:66] leaderelection lost\n
Jun 15 23:10:20.034 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0615 23:10:18.320786       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:10:18.320827       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:10:18.320864       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:10:18.320899       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:10:18.320934       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:10:18.321051       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:10:18.321100       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:10:18.321168       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:10:18.321212       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:10:18.321253       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:10:18.321303       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:10:18.321350       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:10:18.321392       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:10:18.321428       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:10:18.321494       1 server.go:627] external host was not specified, using 10.0.185.148\nI0615 23:10:18.322468       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:10:18.322781       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:10:25.737 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0615 23:10:24.583886       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0615 23:10:24.586447       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0615 23:10:24.590926       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nF0615 23:10:24.595322       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 15 23:10:36.184 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0615 23:10:35.230146       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:10:35.230152       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:10:35.230158       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:10:35.230163       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:10:35.230169       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:10:35.230174       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:10:35.230180       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:10:35.230185       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:10:35.230191       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:10:35.230196       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:10:35.230205       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:10:35.230212       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:10:35.230220       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:10:35.230227       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:10:35.230251       1 server.go:627] external host was not specified, using 10.0.185.148\nI0615 23:10:35.230559       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:10:35.230910       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:10:44.816 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=cluster-policy-controller container exited with code 255 (Error): I0615 23:10:44.714364       1 cert_rotation.go:137] Starting client certificate rotation controller\nI0615 23:10:44.716640       1 policy_controller.go:41] Starting controllers on 0.0.0.0:10357 (v0.0.0-unknown)\nI0615 23:10:44.718484       1 standalone_apiserver.go:103] Started health checks at 0.0.0.0:10357\nI0615 23:10:44.718544       1 leaderelection.go:242] attempting to acquire leader lease  openshift-kube-controller-manager/cluster-policy-controller...\nF0615 23:10:44.719368       1 standalone_apiserver.go:119] listen tcp 0.0.0.0:10357: bind: address already in use\n
Jun 15 23:10:59.276 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=kube-apiserver container exited with code 1 (Error): PRanger"\nI0615 23:10:58.200077       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:10:58.200087       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:10:58.200095       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:10:58.200105       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:10:58.200114       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:10:58.200123       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:10:58.200129       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:10:58.200134       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:10:58.200139       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:10:58.200150       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:10:58.200162       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:10:58.200172       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:10:58.200182       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:10:58.200192       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:10:58.200227       1 server.go:627] external host was not specified, using 10.0.185.148\nI0615 23:10:58.200420       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:10:58.200709       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:11:36.420 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-185-148.ec2.internal node/ip-10-0-185-148.ec2.internal container=kube-controller-manager container exited with code 255 (Error): : dial tcp [::1]:6443: connect: connection refused\nE0615 23:11:35.491434       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:6443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=22575&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:11:35.492549       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/oauths?allowWatchBookmarks=true&resourceVersion=23789&timeout=8m10s&timeoutSeconds=490&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:11:35.495298       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/operator.openshift.io/v1/kubeapiservers?allowWatchBookmarks=true&resourceVersion=26728&timeout=5m46s&timeoutSeconds=346&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:11:35.496464       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/clusteroperators?allowWatchBookmarks=true&resourceVersion=26797&timeout=6m44s&timeoutSeconds=404&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:11:35.497624       1 reflector.go:307] github.com/openshift/client-go/build/informers/externalversions/factory.go:101: Failed to watch *v1.BuildConfig: Get https://localhost:6443/apis/build.openshift.io/v1/buildconfigs?allowWatchBookmarks=true&resourceVersion=26485&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0615 23:11:35.606055       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0615 23:11:35.606266       1 controllermanager.go:291] leaderelection lost\n
Jun 15 23:11:51.458 E ns/openshift-kube-storage-version-migrator pod/migrator-69f7596d44-s8swh node/ip-10-0-170-171.ec2.internal container=migrator container exited with code 2 (Error): I0615 23:00:10.239587       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0615 23:02:41.607506       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF\n
Jun 15 23:11:52.543 E ns/openshift-cluster-machine-approver pod/machine-approver-79cd7dbc74-5dfcn node/ip-10-0-161-67.ec2.internal container=machine-approver-controller container exited with code 2 (Error): sts?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:29.194109       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:30.194848       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:31.195873       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:32.196591       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:33.197334       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\nE0615 23:09:34.198215       1 reflector.go:126] github.com/openshift/cluster-machine-approver/main.go:238: Failed to list *v1beta1.CertificateSigningRequest: Get https://127.0.0.1:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?limit=500&resourceVersion=0: dial tcp 127.0.0.1:6443: connect: connection refused\n
Jun 15 23:12:09.514 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-operator-7b695c8ff4-47dwl node/ip-10-0-170-171.ec2.internal container=operator container exited with code 255 (Error): 000 UTC m=+945.005375162\nI0615 23:11:56.423408       1 operator.go:147] Finished syncing operator at 47.072103ms\nI0615 23:11:56.423454       1 operator.go:145] Starting syncing operator at 2020-06-15 23:11:56.423448436 +0000 UTC m=+945.052496070\nI0615 23:11:56.495968       1 operator.go:147] Finished syncing operator at 72.511988ms\nI0615 23:11:56.498085       1 operator.go:145] Starting syncing operator at 2020-06-15 23:11:56.498079255 +0000 UTC m=+945.127126709\nI0615 23:11:56.723126       1 operator.go:147] Finished syncing operator at 225.040684ms\nI0615 23:12:04.158074       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:04.158063342 +0000 UTC m=+952.787110782\nI0615 23:12:04.187469       1 operator.go:147] Finished syncing operator at 29.398466ms\nI0615 23:12:04.189106       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:04.189097839 +0000 UTC m=+952.818145387\nI0615 23:12:04.228114       1 operator.go:147] Finished syncing operator at 39.008145ms\nI0615 23:12:04.228155       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:04.228148644 +0000 UTC m=+952.857196256\nI0615 23:12:04.271131       1 operator.go:147] Finished syncing operator at 42.97433ms\nI0615 23:12:04.271174       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:04.271167783 +0000 UTC m=+952.900215376\nI0615 23:12:04.570448       1 operator.go:147] Finished syncing operator at 299.269724ms\nI0615 23:12:08.481785       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:08.481777402 +0000 UTC m=+957.110824825\nI0615 23:12:08.563669       1 operator.go:147] Finished syncing operator at 81.878691ms\nI0615 23:12:08.563816       1 operator.go:145] Starting syncing operator at 2020-06-15 23:12:08.563808197 +0000 UTC m=+957.192855904\nI0615 23:12:08.573105       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:12:08.573409       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nF0615 23:12:08.573598       1 builder.go:243] stopped\n
Jun 15 23:12:13.146 E ns/openshift-authentication-operator pod/authentication-operator-86d7859d69-fj5k5 node/ip-10-0-161-67.ec2.internal container=operator container exited with code 255 (Error): ed","status":"False","type":"Degraded"},{"lastTransitionTime":"2020-06-15T23:12:06Z","message":"Progressing: deployment's observed generation did not reach the expected generation","reason":"_OAuthServerDeploymentNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2020-06-15T23:01:53Z","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2020-06-15T22:46:23Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0615 23:12:06.622306       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"c4d59ea9-859a-4fdb-8177-13f71c54fd98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Progressing changed from False to True ("Progressing: deployment's observed generation did not reach the expected generation")\nI0615 23:12:11.876537       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:12:11.876908       1 controller.go:70] Shutting down AuthenticationOperator2\nI0615 23:12:11.876934       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0615 23:12:11.876948       1 ingress_state_controller.go:157] Shutting down IngressStateController\nI0615 23:12:11.876962       1 controller.go:215] Shutting down RouterCertsDomainValidationController\nI0615 23:12:11.876975       1 status_controller.go:212] Shutting down StatusSyncer-authentication\nI0615 23:12:11.876988       1 remove_stale_conditions.go:83] Shutting down RemoveStaleConditions\nI0615 23:12:11.877002       1 logging_controller.go:93] Shutting down LogLevelController\nI0615 23:12:11.877014       1 unsupportedconfigoverrides_controller.go:162] Shutting down UnsupportedConfigOverridesController\nI0615 23:12:11.877027       1 management_state_controller.go:112] Shutting down management-state-controller-authentication\nF0615 23:12:11.877122       1 builder.go:210] server exited\n
Jun 15 23:12:20.835 E ns/openshift-service-ca pod/service-ca-85544dd48-ks2hq node/ip-10-0-232-91.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Jun 15 23:12:29.074 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* deployment openshift-console/downloads is progressing ReplicaSetUpdated: ReplicaSet "downloads-cd8575495" is progressing.\n* deployment openshift-image-registry/cluster-image-registry-operator is progressing ReplicaSetUpdated: ReplicaSet "cluster-image-registry-operator-796ff7d4f5" is progressing.\n* deployment openshift-marketplace/marketplace-operator is progressing ReplicaSetUpdated: ReplicaSet "marketplace-operator-5cfdb4d977" is progressing.\n* deployment openshift-operator-lifecycle-manager/catalog-operator is progressing ReplicaSetUpdated: ReplicaSet "catalog-operator-78c7ddfc76" is progressing.
Jun 15 23:12:31.888 E ns/openshift-monitoring pod/telemeter-client-864687b4b5-crhmm node/ip-10-0-175-155.ec2.internal container=reload container exited with code 2 (Error): 
Jun 15 23:12:31.888 E ns/openshift-monitoring pod/telemeter-client-864687b4b5-crhmm node/ip-10-0-175-155.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Jun 15 23:12:35.006 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:12:33.381098       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:12:33.381109       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:12:33.381119       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:12:33.381130       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:12:33.381139       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:12:33.381149       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:12:33.381160       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:12:33.381170       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:12:33.381179       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:12:33.381193       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:12:33.381210       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:12:33.381221       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:12:33.381233       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:12:33.381245       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:12:33.381283       1 server.go:627] external host was not specified, using 10.0.232.91\nI0615 23:12:33.381468       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:12:33.381745       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:12:38.710 E ns/openshift-monitoring pod/prometheus-adapter-5b8c8f8bd7-nwxs8 node/ip-10-0-218-23.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0615 22:57:17.972683       1 adapter.go:93] successfully using in-cluster auth\nI0615 22:57:18.346120       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 15 23:12:41.833 E ns/openshift-monitoring pod/node-exporter-lr6jd node/ip-10-0-175-155.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:11:53Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:11:58Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:08Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:13Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:23Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:28Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:38Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Jun 15 23:12:43.836 E ns/openshift-monitoring pod/prometheus-adapter-5b8c8f8bd7-fctgd node/ip-10-0-175-155.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0615 22:57:14.897287       1 adapter.go:93] successfully using in-cluster auth\nI0615 22:57:15.797786       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 15 23:12:52.204 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:12:51.746978       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:12:51.746988       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:12:51.746998       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:12:51.747007       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:12:51.747020       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:12:51.747031       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:12:51.747042       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:12:51.747051       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:12:51.747060       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:12:51.747071       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:12:51.747087       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:12:51.747100       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:12:51.747112       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:12:51.747124       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:12:51.747160       1 server.go:627] external host was not specified, using 10.0.232.91\nI0615 23:12:51.747363       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:12:51.747630       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:12:56.973 E ns/openshift-monitoring pod/node-exporter-9slbm node/ip-10-0-218-23.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:02Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:05Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:17Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:20Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:32Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:35Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:50Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Jun 15 23:13:05.103 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-218-23.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-15T23:13:00.172Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-15T23:13:00.175Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-15T23:13:00.176Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:00.177Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-15T23:13:00.177Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-15T23:13:00.177Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-15T23:13:00.178Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-15T23:13:00.178Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-15
Jun 15 23:13:10.263 E ns/openshift-monitoring pod/node-exporter-gfhxv node/ip-10-0-185-148.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:11:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:12Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:14Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:27Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:29Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:44Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:59Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Jun 15 23:13:11.864 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-171.ec2.internal container=rules-configmap-reloader container exited with code 2 (Error): 2020/06/15 22:58:42 Watching directory: "/etc/prometheus/rules/prometheus-k8s-rulefiles-0"\n
Jun 15 23:13:11.864 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-171.ec2.internal container=prometheus-proxy container exited with code 2 (Error): 2020/06/15 22:58:43 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/15 22:58:43 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/15 22:58:43 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 22:58:43 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/15 22:58:43 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 22:58:43 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:prometheus-k8s\n2020/06/15 22:58:43 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/15 22:58:43 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\nI0615 22:58:43.401597       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/15 22:58:43 http.go:107: HTTPS: listening on [::]:9091\n2020/06/15 23:02:42 oauthproxy.go:774: basicauth: 10.129.2.9:55768 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:07:13 oauthproxy.go:774: basicauth: 10.129.2.9:57428 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:11:43 oauthproxy.go:774: basicauth: 10.129.2.9:59288 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:12:11 oauthproxy.go:774: basicauth: 10.128.0.67:48114 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 15 23:13:11.864 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-170-171.ec2.internal container=prometheus-config-reloader container exited with code 2 (Error): ts=2020-06-15T22:58:42.620390012Z caller=main.go:85 msg="Starting prometheus-config-reloader version '1.12.16'."\nlevel=info ts=2020-06-15T22:58:42.620501944Z caller=reloader.go:127 msg="started watching config file for changes" in=/etc/prometheus/config/prometheus.yaml.gz out=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=error ts=2020-06-15T22:58:42.622094197Z caller=runutil.go:87 msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post http://localhost:9090/-/reload: dial tcp [::1]:9090: connect: connection refused"\nlevel=info ts=2020-06-15T22:58:47.75934204Z caller=reloader.go:258 msg="Prometheus reload triggered" cfg_in=/etc/prometheus/config/prometheus.yaml.gz cfg_out=/etc/prometheus/config_out/prometheus.env.yaml rule_dirs=\n
Jun 15 23:13:12.921 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-175-155.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/06/15 22:57:31 Watching directory: "/etc/alertmanager/config"\n
Jun 15 23:13:12.921 E ns/openshift-monitoring pod/alertmanager-main-0 node/ip-10-0-175-155.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/06/15 22:57:31 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 22:57:31 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/15 22:57:31 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 22:57:31 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/15 22:57:31 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 22:57:31 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 22:57:31 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0615 22:57:31.398465       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/15 22:57:31 http.go:107: HTTPS: listening on [::]:9095\n
Jun 15 23:13:15.338 E ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=kube-apiserver container exited with code 1 (Error): IPRanger"\nI0615 23:13:14.815146       1 plugins.go:84] Registered admission plugin "network.openshift.io/RestrictedEndpointsAdmission"\nI0615 23:13:14.815157       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAPIServer"\nI0615 23:13:14.815167       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateAuthentication"\nI0615 23:13:14.815177       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateFeatureGate"\nI0615 23:13:14.815187       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateConsole"\nI0615 23:13:14.815198       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateImage"\nI0615 23:13:14.815208       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateOAuth"\nI0615 23:13:14.815217       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateProject"\nI0615 23:13:14.815229       1 plugins.go:84] Registered admission plugin "config.openshift.io/DenyDeleteClusterConfiguration"\nI0615 23:13:14.815241       1 plugins.go:84] Registered admission plugin "config.openshift.io/ValidateScheduler"\nI0615 23:13:14.815254       1 plugins.go:84] Registered admission plugin "quota.openshift.io/ValidateClusterResourceQuota"\nI0615 23:13:14.815267       1 plugins.go:84] Registered admission plugin "security.openshift.io/ValidateSecurityContextConstraints"\nI0615 23:13:14.815279       1 plugins.go:84] Registered admission plugin "authorization.openshift.io/ValidateRoleBindingRestriction"\nI0615 23:13:14.815292       1 plugins.go:84] Registered admission plugin "security.openshift.io/DefaultSecurityContextConstraints"\nI0615 23:13:14.815328       1 server.go:627] external host was not specified, using 10.0.232.91\nI0615 23:13:14.815500       1 server.go:670] Initializing cache sizes based on 0MB limit\nI0615 23:13:14.817189       1 server.go:188] Version: v1.17.1\nError: failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use\n
Jun 15 23:13:23.360 E ns/openshift-monitoring pod/node-exporter-v89xl node/ip-10-0-232-91.ec2.internal container=node-exporter container exited with code 143 (Error): or gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:25Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:40Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:12:55Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\ntime="2020-06-15T23:13:10Z" level=error msg="error gathering metrics: [from Gatherer #2] collected metric \"virt_platform\" { label:<name:\"type\" value:\"aws\" > gauge:<value:1 > } was collected before with the same name and label values\n" source="log.go:172"\n
Jun 15 23:13:25.995 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-175-155.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-15T23:13:19.396Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-15T23:13:19.401Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-15T23:13:19.402Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-15T23:13:19.404Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-15T23:13:19.404Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-15
Jun 15 23:13:51.568 E ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=kube-scheduler container exited with code 255 (Error): alhost:6443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=25223&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.181521       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:6443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30454&timeout=8m35s&timeoutSeconds=515&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.182701       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:6443/api/v1/services?allowWatchBookmarks=true&resourceVersion=30290&timeout=5m40s&timeoutSeconds=340&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.183663       1 reflector.go:307] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30684&timeoutSeconds=460&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.184840       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:6443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30676&timeout=6m9s&timeoutSeconds=369&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.185886       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:6443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=25223&timeout=7m28s&timeoutSeconds=448&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0615 23:13:51.095970       1 leaderelection.go:288] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nF0615 23:13:51.096011       1 server.go:257] leaderelection lost\n
Jun 15 23:13:51.591 E ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-232-91.ec2.internal node/ip-10-0-232-91.ec2.internal container=kube-controller-manager container exited with code 255 (Error): s/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=25223&timeout=9m23s&timeoutSeconds=563&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.374952       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/console.openshift.io/v1/consolelinks?allowWatchBookmarks=true&resourceVersion=25223&timeout=5m12s&timeoutSeconds=312&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.376159       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/config.openshift.io/v1/featuregates?allowWatchBookmarks=true&resourceVersion=25223&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.378034       1 reflector.go:307] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:6443/apis/machineconfiguration.openshift.io/v1/controllerconfigs?allowWatchBookmarks=true&resourceVersion=25223&timeout=5m14s&timeoutSeconds=314&watch=true: dial tcp [::1]:6443: connect: connection refused\nE0615 23:13:50.379286       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:6443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=26471&timeout=9m14s&timeoutSeconds=554&watch=true: dial tcp [::1]:6443: connect: connection refused\nI0615 23:13:50.775591       1 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition\nF0615 23:13:50.775711       1 controllermanager.go:291] leaderelection lost\nI0615 23:13:50.835238       1 cleaner.go:89] Shutting down CSR cleaner controller\nI0615 23:13:50.835286       1 cronjob_controller.go:101] Shutting down CronJob Manager\n
Jun 15 23:13:56.020 E ns/openshift-marketplace pod/community-operators-5dcf4d956b-xbqkd node/ip-10-0-170-171.ec2.internal container=community-operators container exited with code 2 (Error): 
Jun 15 23:13:56.146 E ns/openshift-marketplace pod/certified-operators-7c5f5496df-hcqpr node/ip-10-0-175-155.ec2.internal container=certified-operators container exited with code 2 (Error): 
Jun 15 23:17:39.606 E ns/openshift-sdn pod/sdn-controller-vvjd5 node/ip-10-0-161-67.ec2.internal container=sdn-controller container exited with code 2 (Error): 2.0/23")\nI0615 23:02:15.895042       1 vnids.go:115] Allocated netid 438663 for namespace "e2e-openshift-api-available-2577"\nI0615 23:02:15.905192       1 vnids.go:115] Allocated netid 16292167 for namespace "e2e-k8s-sig-apps-job-upgrade-9435"\nI0615 23:02:15.918472       1 vnids.go:115] Allocated netid 8135264 for namespace "e2e-k8s-sig-apps-deployment-upgrade-7056"\nI0615 23:02:15.929576       1 vnids.go:115] Allocated netid 14894809 for namespace "e2e-k8s-sig-apps-daemonset-upgrade-6149"\nI0615 23:02:15.936925       1 vnids.go:115] Allocated netid 9878281 for namespace "e2e-kubernetes-api-available-2237"\nI0615 23:02:15.944815       1 vnids.go:115] Allocated netid 6091060 for namespace "e2e-k8s-sig-storage-sig-api-machinery-configmap-upgrade-8093"\nI0615 23:02:15.975815       1 vnids.go:115] Allocated netid 14478696 for namespace "e2e-k8s-sig-apps-replicaset-upgrade-879"\nI0615 23:02:15.987807       1 vnids.go:115] Allocated netid 13884869 for namespace "e2e-k8s-sig-storage-sig-api-machinery-secret-upgrade-6346"\nI0615 23:02:15.996495       1 vnids.go:115] Allocated netid 13662587 for namespace "e2e-k8s-service-lb-available-3239"\nI0615 23:02:16.014106       1 vnids.go:115] Allocated netid 4021500 for namespace "e2e-frontend-ingress-available-3030"\nE0615 23:02:41.617335       1 reflector.go:307] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://api-int.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=22468&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.0.233.9:6443: connect: connection refused\nE0615 23:09:03.204949       1 reflector.go:307] github.com/openshift/client-go/network/informers/externalversions/factory.go:101: Failed to watch *v1.HostSubnet: Get https://api-int.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com:6443/apis/network.openshift.io/v1/hostsubnets?allowWatchBookmarks=true&resourceVersion=25223&timeout=5m15s&timeoutSeconds=315&watch=true: dial tcp 10.0.233.9:6443: connect: connection refused\n
Jun 15 23:17:43.250 E ns/openshift-sdn pod/sdn-controller-k8jhp node/ip-10-0-185-148.ec2.internal container=sdn-controller container exited with code 2 (Error): I0615 22:45:35.246267       1 leaderelection.go:242] attempting to acquire leader lease  openshift-sdn/openshift-network-controller...\nE0615 22:49:45.800442       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.233.9:6443: connect: connection refused\nE0615 23:02:52.923654       1 leaderelection.go:331] error retrieving resource lock openshift-sdn/openshift-network-controller: Get https://api-int.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/openshift-sdn/configmaps/openshift-network-controller: dial tcp 10.0.138.168:6443: connect: connection refused\n
Jun 15 23:17:49.634 E ns/openshift-sdn pod/sdn-gxnlc node/ip-10-0-232-91.ec2.internal container=sdn container exited with code 255 (Error): ]\nI0615 23:14:43.027927    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:14:43.027968    1815 proxier.go:347] userspace syncProxyRules took 30.313135ms\nI0615 23:15:13.163938    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:15:13.163999    1815 proxier.go:347] userspace syncProxyRules took 28.273163ms\nI0615 23:15:43.317075    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:15:43.317101    1815 proxier.go:347] userspace syncProxyRules took 35.950706ms\nI0615 23:16:13.471105    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:16:13.471127    1815 proxier.go:347] userspace syncProxyRules took 29.473922ms\nI0615 23:16:43.611456    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:16:43.611480    1815 proxier.go:347] userspace syncProxyRules took 28.871297ms\nI0615 23:17:13.773648    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:17:13.773673    1815 proxier.go:347] userspace syncProxyRules took 29.927034ms\nI0615 23:17:36.656555    1815 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.130.0.3:6443]\nI0615 23:17:36.656603    1815 roundrobin.go:217] Delete endpoint 10.129.0.2:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0615 23:17:36.656619    1815 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.130.0.3:8443]\nI0615 23:17:36.656627    1815 roundrobin.go:217] Delete endpoint 10.129.0.2:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0615 23:17:36.808983    1815 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:17:36.809011    1815 proxier.go:347] userspace syncProxyRules took 28.336745ms\nF0615 23:17:49.038175    1815 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jun 15 23:18:07.347 E ns/openshift-multus pod/multus-admission-controller-dt9l2 node/ip-10-0-185-148.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Jun 15 23:18:07.561 E ns/openshift-multus pod/multus-vvjdv node/ip-10-0-170-171.ec2.internal container=kube-multus container exited with code 137 (Error): 
Jun 15 23:18:13.693 E ns/openshift-sdn pod/sdn-rkb89 node/ip-10-0-175-155.ec2.internal container=sdn container exited with code 255 (Error):  23:17:52.721994   67348 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31397/tcp)\nI0615 23:17:52.758531   67348 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30483\nI0615 23:17:52.764914   67348 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0615 23:17:52.764948   67348 cmd.go:173] openshift-sdn network plugin registering startup\nI0615 23:17:52.765073   67348 cmd.go:177] openshift-sdn network plugin ready\nI0615 23:18:13.369962   67348 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443 10.130.0.3:6443]\nI0615 23:18:13.370015   67348 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443 10.130.0.3:8443]\nI0615 23:18:13.380646   67348 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443]\nI0615 23:18:13.380706   67348 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0615 23:18:13.380728   67348 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443]\nI0615 23:18:13.380743   67348 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0615 23:18:13.539944   67348 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:13.539974   67348 proxier.go:347] userspace syncProxyRules took 37.539397ms\nI0615 23:18:13.569916   67348 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0615 23:18:13.569946   67348 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 15 23:18:37.842 E ns/openshift-sdn pod/sdn-q2tz8 node/ip-10-0-218-23.ec2.internal container=sdn container exited with code 255 (Error):    1942 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:17:36.812298    1942 proxier.go:347] userspace syncProxyRules took 28.332724ms\nI0615 23:18:06.959030    1942 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:06.959053    1942 proxier.go:347] userspace syncProxyRules took 28.083367ms\nI0615 23:18:13.369783    1942 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443 10.130.0.3:6443]\nI0615 23:18:13.369830    1942 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443 10.130.0.3:8443]\nI0615 23:18:13.381751    1942 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443]\nI0615 23:18:13.381785    1942 roundrobin.go:217] Delete endpoint 10.130.0.3:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0615 23:18:13.381805    1942 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443]\nI0615 23:18:13.381819    1942 roundrobin.go:217] Delete endpoint 10.130.0.3:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0615 23:18:13.514643    1942 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:13.514667    1942 proxier.go:347] userspace syncProxyRules took 27.646264ms\nI0615 23:18:13.655582    1942 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:13.655609    1942 proxier.go:347] userspace syncProxyRules took 29.626427ms\nI0615 23:18:37.178297    1942 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0615 23:18:37.178336    1942 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 15 23:18:51.767 E ns/openshift-multus pod/multus-xq782 node/ip-10-0-175-155.ec2.internal container=kube-multus container exited with code 137 (Error): 
Jun 15 23:19:20.759 E ns/openshift-sdn pod/sdn-hkrt4 node/ip-10-0-170-171.ec2.internal container=sdn container exited with code 255 (Error): ing healthcheck "openshift-ingress/router-default" on port 30483\nI0615 23:18:33.067362   85481 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0615 23:18:33.067403   85481 cmd.go:173] openshift-sdn network plugin registering startup\nI0615 23:18:33.067544   85481 cmd.go:177] openshift-sdn network plugin ready\nI0615 23:18:51.042946   85481 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.17:6443 10.129.0.65:6443 10.130.0.79:6443]\nI0615 23:18:51.042990   85481 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.17:8443 10.129.0.65:8443 10.130.0.79:8443]\nI0615 23:18:51.053888   85481 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.129.0.65:6443 10.130.0.79:6443]\nI0615 23:18:51.053925   85481 roundrobin.go:217] Delete endpoint 10.128.0.17:6443 for service "openshift-multus/multus-admission-controller:webhook"\nI0615 23:18:51.053945   85481 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.129.0.65:8443 10.130.0.79:8443]\nI0615 23:18:51.053960   85481 roundrobin.go:217] Delete endpoint 10.128.0.17:8443 for service "openshift-multus/multus-admission-controller:metrics"\nI0615 23:18:51.183844   85481 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:51.183868   85481 proxier.go:347] userspace syncProxyRules took 29.383842ms\nI0615 23:18:51.318448   85481 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:18:51.318472   85481 proxier.go:347] userspace syncProxyRules took 28.203053ms\nI0615 23:19:19.717583   85481 ovs.go:169] Error executing ovs-ofctl: ovs-ofctl: /var/run/openvswitch/br0.mgmt: failed to open socket (Connection refused)\nF0615 23:19:19.717621   85481 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: OVS reinitialization required: plugin is not setup\n
Jun 15 23:19:21.968 E ns/openshift-multus pod/multus-admission-controller-zh2gw node/ip-10-0-161-67.ec2.internal container=multus-admission-controller container exited with code 137 (Error): 
Jun 15 23:19:47.201 E ns/openshift-sdn pod/sdn-rvzf7 node/ip-10-0-161-67.ec2.internal container=sdn container exited with code 255 (Error):  172.30.122.11:443/TCP\nI0615 23:19:17.566213  101987 proxier.go:766] Stale udp service openshift-dns/dns-default:dns -> 172.30.0.10\nI0615 23:19:17.696156  101987 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:19:17.696205  101987 proxier.go:347] userspace syncProxyRules took 129.683172ms\nI0615 23:19:17.753737  101987 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:http" (:32712/tcp)\nI0615 23:19:17.754193  101987 proxier.go:1609] Opened local port "nodePort for e2e-k8s-service-lb-available-3239/service-test:" (:32233/tcp)\nI0615 23:19:17.754781  101987 proxier.go:1609] Opened local port "nodePort for openshift-ingress/router-default:https" (:31397/tcp)\nI0615 23:19:17.794710  101987 service_health.go:98] Opening healthcheck "openshift-ingress/router-default" on port 30483\nI0615 23:19:17.803565  101987 proxy.go:305] openshift-sdn proxy services and endpoints initialized\nI0615 23:19:17.803603  101987 cmd.go:173] openshift-sdn network plugin registering startup\nI0615 23:19:17.803714  101987 cmd.go:177] openshift-sdn network plugin ready\nI0615 23:19:21.653940  101987 pod.go:539] CNI_DEL openshift-multus/multus-admission-controller-zh2gw\nI0615 23:19:26.633817  101987 pod.go:503] CNI_ADD openshift-multus/multus-admission-controller-7q7g5 got IP 10.128.0.83, ofport 84\nI0615 23:19:31.019136  101987 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:webhook to [10.128.0.83:6443 10.129.0.65:6443 10.130.0.79:6443]\nI0615 23:19:31.019285  101987 roundrobin.go:267] LoadBalancerRR: Setting endpoints for openshift-multus/multus-admission-controller:metrics to [10.128.0.83:8443 10.129.0.65:8443 10.130.0.79:8443]\nI0615 23:19:31.236216  101987 proxier.go:368] userspace proxy: processing 0 service events\nI0615 23:19:31.236243  101987 proxier.go:347] userspace syncProxyRules took 39.808821ms\nF0615 23:19:46.903713  101987 healthcheck.go:82] SDN healthcheck detected OVS server change, restarting: timed out waiting for the condition\n
Jun 15 23:20:32.858 E ns/openshift-multus pod/multus-r5cfp node/ip-10-0-185-148.ec2.internal container=kube-multus container exited with code 137 (Error): 
Jun 15 23:21:17.952 E ns/openshift-multus pod/multus-s6lnk node/ip-10-0-218-23.ec2.internal container=kube-multus container exited with code 137 (Error): 
Jun 15 23:22:01.690 E ns/openshift-multus pod/multus-fr2qq node/ip-10-0-161-67.ec2.internal container=kube-multus container exited with code 137 (Error): 
Jun 15 23:22:31.825 E ns/openshift-machine-config-operator pod/machine-config-operator-56678786dd-kfnbx node/ip-10-0-161-67.ec2.internal container=machine-config-operator container exited with code 2 (Error): d not find the requested resource (get machineconfigs.machineconfiguration.openshift.io)\nI0615 22:46:18.689997       1 operator.go:264] Starting MachineConfigOperator\nI0615 22:46:18.722463       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"1d8183ab-7742-4295-bdc9-da8d2cee9817", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator is bootstrapping to [{operator 0.0.1-2020-06-15-213103}]\nE0615 22:46:19.402253       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.ControllerConfig: the server could not find the requested resource (get controllerconfigs.machineconfiguration.openshift.io)\nE0615 22:46:19.414163       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nE0615 22:46:20.423643       1 reflector.go:153] github.com/openshift/machine-config-operator/pkg/generated/informers/externalversions/factory.go:101: Failed to list *v1.MachineConfigPool: the server could not find the requested resource (get machineconfigpools.machineconfiguration.openshift.io)\nI0615 22:46:24.192675       1 sync.go:61] [init mode] synced RenderConfig in 5.450372629s\nI0615 22:46:24.781356       1 sync.go:61] [init mode] synced MachineConfigPools in 588.162447ms\nI0615 22:46:45.422343       1 sync.go:61] [init mode] synced MachineConfigDaemon in 20.640932164s\nI0615 22:46:52.478660       1 sync.go:61] [init mode] synced MachineConfigController in 7.056272683s\nI0615 22:46:55.547995       1 sync.go:61] [init mode] synced MachineConfigServer in 3.069293756s\nI0615 22:47:10.558924       1 sync.go:61] [init mode] synced RequiredPools in 15.010864461s\nI0615 22:47:10.591076       1 sync.go:85] Initialization complete\n
Jun 15 23:24:27.499 E ns/openshift-machine-config-operator pod/machine-config-daemon-49dxn node/ip-10-0-218-23.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 15 23:24:36.492 E ns/openshift-machine-config-operator pod/machine-config-daemon-9wgvn node/ip-10-0-175-155.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 15 23:24:43.338 E ns/openshift-machine-config-operator pod/machine-config-daemon-77n24 node/ip-10-0-161-67.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 15 23:24:48.453 E ns/openshift-machine-config-operator pod/machine-config-daemon-twnlz node/ip-10-0-170-171.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 15 23:25:03.465 E ns/openshift-machine-config-operator pod/machine-config-daemon-8dkfb node/ip-10-0-232-91.ec2.internal container=oauth-proxy container exited with code 143 (Error): 
Jun 15 23:27:37.420 E ns/openshift-machine-config-operator pod/machine-config-server-hkzmg node/ip-10-0-185-148.ec2.internal container=machine-config-server container exited with code 2 (Error): I0615 22:46:53.821264       1 start.go:38] Version: machine-config-daemon-4.4.0-202006102218-4-gb6c95fea-dirty (b6c95fea3987483780994c8a5809a6afd15a633d)\nI0615 22:46:53.822557       1 api.go:56] Launching server on :22624\nI0615 22:46:53.822620       1 api.go:56] Launching server on :22623\nI0615 22:52:10.040508       1 api.go:102] Pool worker requested by 10.0.233.9:16947\n
Jun 15 23:27:40.987 E ns/openshift-machine-config-operator pod/machine-config-server-zvkgg node/ip-10-0-161-67.ec2.internal container=machine-config-server container exited with code 2 (Error): I0615 22:46:54.625548       1 start.go:38] Version: machine-config-daemon-4.4.0-202006102218-4-gb6c95fea-dirty (b6c95fea3987483780994c8a5809a6afd15a633d)\nI0615 22:46:54.626541       1 api.go:56] Launching server on :22624\nI0615 22:46:54.626668       1 api.go:56] Launching server on :22623\n
Jun 15 23:27:44.091 E ns/openshift-machine-config-operator pod/machine-config-server-xrjtc node/ip-10-0-232-91.ec2.internal container=machine-config-server container exited with code 2 (Error): I0615 22:46:54.433215       1 start.go:38] Version: machine-config-daemon-4.4.0-202006102218-4-gb6c95fea-dirty (b6c95fea3987483780994c8a5809a6afd15a633d)\nI0615 22:46:54.434418       1 api.go:56] Launching server on :22624\nI0615 22:46:54.434498       1 api.go:56] Launching server on :22623\nI0615 22:52:14.294464       1 api.go:102] Pool worker requested by 10.0.138.168:24128\nI0615 22:52:14.488299       1 api.go:102] Pool worker requested by 10.0.138.168:52126\n
Jun 15 23:27:51.593 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-fcbf8cc44-hpbtj node/ip-10-0-232-91.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): Name:"kube-controller-manager-operator", UID:"4f4c9250-b824-43de-8c71-06fd643c3aea", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-232-91.ec2.internal pods/kube-controller-manager-ip-10-0-232-91.ec2.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0615 23:14:04.909001       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-controller-manager-operator", Name:"kube-controller-manager-operator", UID:"4f4c9250-b824-43de-8c71-06fd643c3aea", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: nodes/ip-10-0-232-91.ec2.internal pods/kube-controller-manager-ip-10-0-232-91.ec2.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"\nI0615 23:27:50.929896       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:27:50.930013       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0615 23:27:50.930052       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0615 23:27:50.930066       1 targetconfigcontroller.go:644] Shutting down TargetConfigController\nI0615 23:27:50.930081       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0615 23:27:50.930099       1 base_controller.go:74] Shutting down  ...\nI0615 23:27:50.930114       1 base_controller.go:74] Shutting down NodeController ...\nF0615 23:27:50.930118       1 builder.go:243] stopped\nI0615 23:27:50.930127       1 base_controller.go:74] Shutting down PruneController ...\n
Jun 15 23:27:54.654 E ns/openshift-machine-api pod/machine-api-controllers-b79d96849-2mnqj node/ip-10-0-232-91.ec2.internal container=controller-manager container exited with code 1 (Error): 
Jun 15 23:27:55.646 E ns/openshift-machine-api pod/machine-api-operator-78c8548dfd-gwxcz node/ip-10-0-232-91.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Jun 15 23:28:13.038 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-170-171.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-15T23:28:08.554Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-15T23:28:08.557Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-15T23:28:08.558Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-15T23:28:08.559Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-15T23:28:08.559Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-15T23:28:08.560Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-15T23:28:08.560Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-15T23:28:08.560Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-15T23:28:08.561Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-15T23:28:08.561Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-15
Jun 15 23:28:14.789 E ns/openshift-console pod/console-557bfcbb9b-mkl7q node/ip-10-0-232-91.ec2.internal container=console container exited with code 2 (Error): 2020-06-15T23:14:06Z cmd/main: cookies are secure!\n2020-06-15T23:14:06Z cmd/main: Binding to [::]:8443...\n2020-06-15T23:14:06Z cmd/main: using TLS\n
Jun 15 23:28:23.264 E ns/openshift-cluster-storage-operator pod/cluster-storage-operator-8f74b6f9d-tvzr7 node/ip-10-0-161-67.ec2.internal container=cluster-storage-operator container exited with code 1 (Error): {"level":"info","ts":1592263702.93343,"logger":"cmd","msg":"Go Version: go1.10.8"}\n{"level":"info","ts":1592263702.9334762,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}\n{"level":"info","ts":1592263702.933492,"logger":"cmd","msg":"Version of operator-sdk: v0.4.0"}\n{"level":"info","ts":1592263702.9367971,"logger":"leader","msg":"Trying to become the leader."}\n{"level":"error","ts":1592263702.9439368,"logger":"cmd","msg":"","error":"Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused","stacktrace":"github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-storage-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/github.com/openshift/cluster-storage-operator/cmd/manager/main.go:53\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:198"}\n
Jun 15 23:29:20.371 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 15 23:30:20.179 E clusteroperator/openshift-apiserver changed Degraded to True: APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jun 15 23:30:26.999 E ns/openshift-cluster-node-tuning-operator pod/tuned-p4rzf node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.011 E ns/openshift-image-registry pod/node-ca-x79mq node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.038 E ns/openshift-monitoring pod/node-exporter-zvrmd node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.050 E ns/openshift-sdn pod/ovs-8rcwx node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.063 E ns/openshift-sdn pod/sdn-sz9nj node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.080 E ns/openshift-multus pod/multus-nzn5g node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.098 E ns/openshift-dns pod/dns-default-nm9vw node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:27.113 E ns/openshift-machine-config-operator pod/machine-config-daemon-jrz8t node/ip-10-0-218-23.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:35.491 E ns/openshift-machine-config-operator pod/machine-config-daemon-jrz8t node/ip-10-0-218-23.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 15 23:30:37.802 E ns/openshift-cluster-node-tuning-operator pod/tuned-bcqm5 node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.833 E ns/openshift-controller-manager pod/controller-manager-jfpmv node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.873 E ns/openshift-image-registry pod/node-ca-lk6ld node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.892 E ns/openshift-monitoring pod/node-exporter-wb4kn node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.907 E ns/openshift-sdn pod/sdn-controller-j5jh2 node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.921 E ns/openshift-sdn pod/sdn-ppbz4 node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.933 E ns/openshift-sdn pod/ovs-w2bgr node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.945 E ns/openshift-multus pod/multus-admission-controller-7mqzc node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.958 E ns/openshift-multus pod/multus-9ckv2 node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.969 E ns/openshift-dns pod/dns-default-xc8kk node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.983 E ns/openshift-machine-config-operator pod/machine-config-daemon-xxpk7 node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:37.995 E ns/openshift-machine-config-operator pod/machine-config-server-zr2ln node/ip-10-0-232-91.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:30:44.587 E ns/openshift-monitoring pod/thanos-querier-775977c79b-jsxqn node/ip-10-0-170-171.ec2.internal container=oauth-proxy container exited with code 2 (Error): vider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 23:12:47 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/15 23:12:47 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 23:12:47 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/15 23:12:47 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/15 23:12:47 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/15 23:12:47 http.go:107: HTTPS: listening on [::]:9091\nI0615 23:12:47.613115       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/15 23:14:01 oauthproxy.go:774: basicauth: 10.130.0.51:59610 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:15:01 oauthproxy.go:774: basicauth: 10.130.0.51:60858 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:19:01 oauthproxy.go:774: basicauth: 10.130.0.51:35524 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:20:01 oauthproxy.go:774: basicauth: 10.130.0.51:36316 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:21:01 oauthproxy.go:774: basicauth: 10.130.0.51:36968 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:22:01 oauthproxy.go:774: basicauth: 10.130.0.51:37656 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:27:53 oauthproxy.go:774: basicauth: 10.128.0.94:39932 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:29:03 oauthproxy.go:774: basicauth: 10.128.0.94:54006 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 15 23:30:44.619 E ns/openshift-kube-storage-version-migrator pod/migrator-696b55fbd5-6mzbs node/ip-10-0-170-171.ec2.internal container=migrator container exited with code 2 (Error): 
Jun 15 23:30:44.668 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-5cbb6cb644-xwclf node/ip-10-0-170-171.ec2.internal container=snapshot-controller container exited with code 2 (Error): 
Jun 15 23:30:45.740 E ns/openshift-monitoring pod/openshift-state-metrics-67dc667bfd-rxgcl node/ip-10-0-170-171.ec2.internal container=openshift-state-metrics container exited with code 2 (Error): 
Jun 15 23:30:46.804 E ns/openshift-marketplace pod/community-operators-7f985b8ccd-9g98q node/ip-10-0-170-171.ec2.internal container=community-operators container exited with code 2 (Error): 
Jun 15 23:30:46.830 E ns/openshift-monitoring pod/prometheus-adapter-c6b8b9df6-jm87k node/ip-10-0-170-171.ec2.internal container=prometheus-adapter container exited with code 2 (Error): I0615 23:12:37.441201       1 adapter.go:93] successfully using in-cluster auth\nI0615 23:12:38.213795       1 secure_serving.go:116] Serving securely on [::]:6443\n
Jun 15 23:30:46.874 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-170-171.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/06/15 23:28:05 Watching directory: "/etc/alertmanager/config"\n
Jun 15 23:30:46.874 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-170-171.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/06/15 23:28:05 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 23:28:05 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/15 23:28:05 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 23:28:05 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/15 23:28:05 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 23:28:05 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 23:28:05 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\nI0615 23:28:05.685168       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/15 23:28:05 http.go:107: HTTPS: listening on [::]:9095\n
Jun 15 23:30:49.068 E ns/openshift-machine-config-operator pod/machine-config-daemon-xxpk7 node/ip-10-0-232-91.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 15 23:31:18.945 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-218-23.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-15T23:31:16.922Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-15T23:31:16.931Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-15T23:31:16.931Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-15T23:31:16.932Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-15T23:31:16.932Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-15T23:31:16.932Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-15T23:31:16.933Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-15T23:31:16.933Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-15T23:31:16.933Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-15T23:31:16.934Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-15T23:31:16.934Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-15
Jun 15 23:31:22.869 E ns/openshift-console pod/console-557bfcbb9b-gqwdp node/ip-10-0-185-148.ec2.internal container=console container exited with code 2 (Error): 2020-06-15T23:13:59Z cmd/main: cookies are secure!\n2020-06-15T23:13:59Z cmd/main: Binding to [::]:8443...\n2020-06-15T23:13:59Z cmd/main: using TLS\n2020-06-15T23:18:47Z auth: failed to get latest auth source data: request to OAuth issuer endpoint https://oauth-openshift.apps.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com/oauth/token failed: Head https://oauth-openshift.apps.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com: EOF\n
Jun 15 23:32:28.555 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 15 23:33:28.260 E ns/openshift-cluster-node-tuning-operator pod/tuned-kw8t5 node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.280 E ns/openshift-image-registry pod/node-ca-c97bg node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.299 E ns/openshift-monitoring pod/node-exporter-45dcx node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.318 E ns/openshift-multus pod/multus-j6cfb node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.346 E ns/openshift-sdn pod/ovs-r2twj node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.361 E ns/openshift-dns pod/dns-default-c8mxz node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:28.374 E ns/openshift-machine-config-operator pod/machine-config-daemon-sfgkg node/ip-10-0-170-171.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:41.031 E ns/openshift-machine-config-operator pod/machine-config-daemon-sfgkg node/ip-10-0-170-171.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 15 23:33:46.548 E ns/openshift-image-registry pod/node-ca-bdmqg node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.594 E ns/openshift-controller-manager pod/controller-manager-bc6zw node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.626 E ns/openshift-cluster-node-tuning-operator pod/tuned-tbqtk node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.642 E ns/openshift-monitoring pod/node-exporter-tv8bt node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.660 E ns/openshift-sdn pod/sdn-controller-4splf node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.675 E ns/openshift-multus pod/multus-admission-controller-478tl node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.690 E ns/openshift-sdn pod/ovs-zh4wk node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.709 E ns/openshift-sdn pod/sdn-fgbrn node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.727 E ns/openshift-multus pod/multus-4srlm node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.744 E ns/openshift-dns pod/dns-default-rfkgm node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.772 E ns/openshift-machine-config-operator pod/machine-config-daemon-fqv4w node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:33:46.823 E ns/openshift-machine-config-operator pod/machine-config-server-sgkfb node/ip-10-0-185-148.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:34:06.951 E ns/openshift-monitoring pod/prometheus-k8s-0 node/ip-10-0-175-155.ec2.internal container=prometheus container exited with code 1 (Error): caller=main.go:648 msg="Starting TSDB ..."\nlevel=info ts=2020-06-15T23:13:19.396Z caller=web.go:506 component=web msg="Start listening for connections" address=127.0.0.1:9090\nlevel=info ts=2020-06-15T23:13:19.401Z caller=head.go:584 component=tsdb msg="replaying WAL, this may take awhile"\nlevel=info ts=2020-06-15T23:13:19.402Z caller=head.go:632 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:663 fs_type=XFS_SUPER_MAGIC\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:664 msg="TSDB started"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:734 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:517 msg="Stopping scrape discovery manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:531 msg="Stopping notify discovery manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:553 msg="Stopping scrape manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=manager.go:814 component="rule manager" msg="Stopping rule manager..."\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:513 msg="Scrape discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=manager.go:820 component="rule manager" msg="Rule manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:527 msg="Notify discovery manager stopped"\nlevel=info ts=2020-06-15T23:13:19.403Z caller=main.go:547 msg="Scrape manager stopped"\nlevel=info ts=2020-06-15T23:13:19.404Z caller=notifier.go:598 component=notifier msg="Stopping notification manager..."\nlevel=info ts=2020-06-15T23:13:19.404Z caller=main.go:718 msg="Notifier manager stopped"\nlevel=error ts=2020-06-15
Jun 15 23:34:08.339 E ns/openshift-monitoring pod/telemeter-client-56757c7cf4-grcb4 node/ip-10-0-175-155.ec2.internal container=reload container exited with code 2 (Error): 
Jun 15 23:34:08.339 E ns/openshift-monitoring pod/telemeter-client-56757c7cf4-grcb4 node/ip-10-0-175-155.ec2.internal container=telemeter-client container exited with code 2 (Error): 
Jun 15 23:34:08.379 E ns/openshift-monitoring pod/thanos-querier-775977c79b-6x8hg node/ip-10-0-175-155.ec2.internal container=oauth-proxy container exited with code 2 (Error): 2020/06/15 23:27:49 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/15 23:27:49 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/15 23:27:49 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 23:27:49 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9090/"\n2020/06/15 23:27:49 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 23:27:49 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:thanos-querier\n2020/06/15 23:27:49 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/15 23:27:49 main.go:154: using htpasswd file /etc/proxy/htpasswd/auth\n2020/06/15 23:27:49 http.go:107: HTTPS: listening on [::]:9091\nI0615 23:27:49.409960       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n2020/06/15 23:29:53 oauthproxy.go:774: basicauth: 10.128.0.94:36058 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:30:55 oauthproxy.go:774: basicauth: 10.128.0.94:41392 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:32:53 oauthproxy.go:774: basicauth: 10.128.0.94:35516 Authorization header does not start with 'Basic', skipping basic authentication\n2020/06/15 23:33:53 oauthproxy.go:774: basicauth: 10.128.0.94:40352 Authorization header does not start with 'Basic', skipping basic authentication\n
Jun 15 23:34:08.413 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-175-155.ec2.internal container=config-reloader container exited with code 2 (Error): 2020/06/15 23:31:03 Watching directory: "/etc/alertmanager/config"\n
Jun 15 23:34:08.413 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-175-155.ec2.internal container=alertmanager-proxy container exited with code 2 (Error): 2020/06/15 23:31:03 provider.go:118: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 23:31:03 provider.go:123: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2020/06/15 23:31:03 provider.go:311: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2020/06/15 23:31:03 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:9093/"\n2020/06/15 23:31:03 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"\n2020/06/15 23:31:03 oauthproxy.go:227: OAuthProxy configured for  Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2020/06/15 23:31:03 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled\n2020/06/15 23:31:03 http.go:107: HTTPS: listening on [::]:9095\nI0615 23:31:03.826095       1 dynamic_serving_content.go:129] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
Jun 15 23:34:08.477 E ns/openshift-marketplace pod/community-operators-7f985b8ccd-jd5tm node/ip-10-0-175-155.ec2.internal container=community-operators container exited with code 2 (Error): 
Jun 15 23:34:12.659 E ns/openshift-controller-manager-operator pod/openshift-controller-manager-operator-58b96c8f9b-8qzkj node/ip-10-0-161-67.ec2.internal container=operator container exited with code 255 (Error): controller-manager/roles/prometheus-k8s\nI0615 23:33:07.586358       1 request.go:565] Throttling request took 196.620839ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0615 23:33:27.383790       1 request.go:565] Throttling request took 157.936138ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0615 23:33:27.583805       1 request.go:565] Throttling request took 197.02151ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0615 23:33:28.695961       1 httplog.go:90] GET /metrics: (8.311939ms) 200 [Prometheus/2.15.2 10.129.2.22:41640]\nI0615 23:33:34.262238       1 httplog.go:90] GET /metrics: (2.475024ms) 200 [Prometheus/2.15.2 10.128.2.22:56922]\nI0615 23:33:47.383896       1 request.go:565] Throttling request took 152.044601ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/roles/prometheus-k8s\nI0615 23:33:47.583953       1 request.go:565] Throttling request took 196.91411ms, request: GET:https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-controller-manager/rolebindings/prometheus-k8s\nI0615 23:33:58.693713       1 httplog.go:90] GET /metrics: (6.047936ms) 200 [Prometheus/2.15.2 10.129.2.22:41640]\nI0615 23:34:04.263311       1 httplog.go:90] GET /metrics: (3.454858ms) 200 [Prometheus/2.15.2 10.128.2.22:56922]\nI0615 23:34:07.368264       1 cmd.go:83] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:34:07.370799       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0615 23:34:07.370826       1 status_controller.go:212] Shutting down StatusSyncer-openshift-controller-manager\nI0615 23:34:07.371141       1 operator.go:135] Shutting down OpenShiftControllerManagerOperator\nF0615 23:34:07.371360       1 builder.go:243] stopped\n
Jun 15 23:34:13.955 E ns/openshift-service-catalog-controller-manager-operator pod/openshift-service-catalog-controller-manager-operator-fccb29bxz node/ip-10-0-161-67.ec2.internal container=operator container exited with code 255 (Error): 0.17.1/tools/cache/reflector.go:105\nI0615 23:32:40.101218       1 reflector.go:185] Listing and watching *v1.Proxy from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0615 23:32:40.101328       1 reflector.go:185] Listing and watching *v1.ServiceCatalogControllerManager from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0615 23:32:40.101001       1 reflector.go:185] Listing and watching *v1.Deployment from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0615 23:32:40.101591       1 reflector.go:185] Listing and watching *v1.ConfigMap from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0615 23:32:40.101019       1 reflector.go:185] Listing and watching *v1.ServiceAccount from k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105\nI0615 23:33:00.230099       1 httplog.go:90] GET /metrics: (8.118146ms) 200 [Prometheus/2.15.2 10.128.2.22:36810]\nI0615 23:33:07.294433       1 httplog.go:90] GET /metrics: (2.095527ms) 200 [Prometheus/2.15.2 10.129.2.22:51046]\nI0615 23:33:30.230515       1 httplog.go:90] GET /metrics: (8.560056ms) 200 [Prometheus/2.15.2 10.128.2.22:36810]\nI0615 23:33:37.294606       1 httplog.go:90] GET /metrics: (2.16585ms) 200 [Prometheus/2.15.2 10.129.2.22:51046]\nI0615 23:33:41.956255       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0615 23:33:41.976166       1 reflector.go:268] k8s.io/client-go@v0.17.1/tools/cache/reflector.go:105: forcing resync\nI0615 23:34:00.230351       1 httplog.go:90] GET /metrics: (8.157442ms) 200 [Prometheus/2.15.2 10.128.2.22:36810]\nI0615 23:34:10.728624       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:34:10.729361       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0615 23:34:10.729522       1 status_controller.go:212] Shutting down StatusSyncer-service-catalog-controller-manager\nI0615 23:34:10.729533       1 operator.go:227] Shutting down ServiceCatalogControllerManagerOperator\nF0615 23:34:10.729578       1 builder.go:243] stopped\n
Jun 15 23:34:14.075 E ns/openshift-kube-controller-manager-operator pod/kube-controller-manager-operator-fcbf8cc44-98jmw node/ip-10-0-161-67.ec2.internal container=kube-controller-manager-operator container exited with code 255 (Error): ager-ip-10-0-185-148.ec2.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: nodes/ip-10-0-185-148.ec2.internal pods/kube-controller-manager-ip-10-0-185-148.ec2.internal container=\"kube-controller-manager\" is not ready\nNodeControllerDegraded: All master nodes are ready"\nI0615 23:34:10.442705       1 cmd.go:84] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:34:10.445327       1 satokensigner_controller.go:332] Shutting down SATokenSignerController\nI0615 23:34:10.445417       1 base_controller.go:74] Shutting down RevisionController ...\nI0615 23:34:10.445475       1 resourcesync_controller.go:228] Shutting down ResourceSyncController\nI0615 23:34:10.445543       1 config_observer_controller.go:160] Shutting down ConfigObserver\nI0615 23:34:10.445591       1 base_controller.go:74] Shutting down InstallerStateController ...\nI0615 23:34:10.445635       1 base_controller.go:74] Shutting down StaticPodStateController ...\nI0615 23:34:10.445677       1 base_controller.go:74] Shutting down InstallerController ...\nI0615 23:34:10.445748       1 base_controller.go:74] Shutting down UnsupportedConfigOverridesController ...\nI0615 23:34:10.454826       1 base_controller.go:74] Shutting down PruneController ...\nI0615 23:34:10.454883       1 base_controller.go:74] Shutting down LoggingSyncer ...\nI0615 23:34:10.454957       1 status_controller.go:212] Shutting down StatusSyncer-kube-controller-manager\nI0615 23:34:10.455003       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0615 23:34:10.455046       1 client_cert_rotation_controller.go:180] Shutting down CertRotationController - "CSRSigningCert"\nI0615 23:34:10.455307       1 base_controller.go:74] Shutting down  ...\nI0615 23:34:10.455364       1 base_controller.go:74] Shutting down NodeController ...\nI0615 23:34:10.455403       1 targetconfigcontroller.go:644] Shutting down TargetConfigController\nF0615 23:34:10.455873       1 builder.go:243] stopped\n
Jun 15 23:34:14.155 E ns/openshift-apiserver-operator pod/openshift-apiserver-operator-57c87b8d9b-d9t9k node/ip-10-0-161-67.ec2.internal container=openshift-apiserver-operator container exited with code 255 (Error): nces are unavailable"\nI0615 23:31:19.407405       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"a06ad025-49ee-44e6-810f-164de0841a97", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable"\nI0615 23:34:09.519636       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-apiserver-operator", Name:"openshift-apiserver-operator", UID:"a06ad025-49ee-44e6-810f-164de0841a97", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable" to "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable"\nI0615 23:34:11.518548       1 cmd.go:79] Received SIGTERM or SIGINT signal, shutting down controller.\nI0615 23:34:11.518693       1 tlsconfig.go:234] Shutting down DynamicServingCertificateController\nI0615 23:34:11.519210       1 remove_stale_conditions.go:84] Shutting down RemoveStaleConditions\nI0615 23:34:11.519313       1 base_controller.go:73] Shutting down LoggingSyncer ...\nI0615 23:34:11.519344       1 base_controller.go:73] Shutting down UnsupportedConfigOverridesController ...\nI0615 23:34:11.519360       1 base_controller.go:73] Shutting down RevisionController ...\nI0615 23:34:11.519371       1 base_controller.go:73] Shutting down  ...\nI0615 23:34:11.519685       1 apiservice_controller.go:215] Shutting down APIServiceController_openshift-apiserver\nI0615 23:34:11.519716       1 workload_controller.go:177] Shutting down OpenShiftAPIServerOperator\nF0615 23:34:11.519732       1 builder.go:243] stopped\n
Jun 15 23:34:14.217 E ns/openshift-service-ca-operator pod/service-ca-operator-7b7764c688-fknvx node/ip-10-0-161-67.ec2.internal container=operator container exited with code 255 (Error): 
Jun 15 23:34:14.328 E ns/openshift-machine-api pod/machine-api-operator-78c8548dfd-qh7sg node/ip-10-0-161-67.ec2.internal container=machine-api-operator container exited with code 2 (Error): 
Jun 15 23:34:14.346 E ns/openshift-service-ca pod/service-ca-55d8bcff97-fhsdv node/ip-10-0-161-67.ec2.internal container=service-ca-controller container exited with code 255 (Error): 
Jun 15 23:34:15.145 E ns/openshift-machine-config-operator pod/machine-config-operator-594d979ff9-6kcxx node/ip-10-0-161-67.ec2.internal container=machine-config-operator container exited with code 2 (Error): nfig...\nE0615 23:24:25.582768       1 event.go:319] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config", UID:"ab58541e-2f25-4b9b-b3de-cdbc99949e47", ResourceVersion:"35241", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727857978, loc:(*time.Location)(0x27f8080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-operator-594d979ff9-6kcxx_3bcbec85-7abf-472a-86e3-afaf61f099d9\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2020-06-15T23:24:25Z\",\"renewTime\":\"2020-06-15T23:24:25Z\",\"leaderTransitions\":1}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-operator-594d979ff9-6kcxx_3bcbec85-7abf-472a-86e3-afaf61f099d9 became leader'\nI0615 23:24:25.582849       1 leaderelection.go:252] successfully acquired lease openshift-machine-config-operator/machine-config\nI0615 23:24:26.206986       1 operator.go:264] Starting MachineConfigOperator\nI0615 23:24:26.213516       1 event.go:281] Event(v1.ObjectReference{Kind:"", Namespace:"", Name:"machine-config", UID:"1d8183ab-7742-4295-bdc9-da8d2cee9817", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorVersionChanged' clusteroperator/machine-config-operator started a version change from [{operator 0.0.1-2020-06-15-213103}] to [{operator 0.0.1-2020-06-15-213918}]\n
Jun 15 23:35:23.624 E ns/openshift-marketplace pod/redhat-marketplace-54d7646774-h7shh node/ip-10-0-218-23.ec2.internal container=redhat-marketplace container exited with code 2 (Error): 
Jun 15 23:35:44.129 E clusteroperator/dns changed Degraded to True: NotAllDNSesAvailable: Not all desired DNS DaemonSets available
Jun 15 23:36:07.273 E kube-apiserver failed contacting the API: Get https://api.ci-op-04gnlsvw-6dd88.origin-ci-int-aws.dev.rhcloud.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=40723&timeout=7m11s&timeoutSeconds=431&watch=true: dial tcp 3.215.116.34:6443: connect: connection refused
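The "connection refused" above comes from the monitor's clusterversions watch hitting the API endpoint while the control plane is rolling. A minimal sketch, assuming the same kubeconfig and treating the outage as transient, of polling until the apiserver answers again (illustrative only, not part of openshift-tests; the kubeconfig path and timeouts are placeholders):

// apiwait: illustrative sketch of waiting out a brief apiserver outage by
// polling /version until the endpoint accepts connections again.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 5s for up to 5 minutes; "connection refused" during a
	// control-plane rollout is treated as transient rather than fatal.
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		if _, err := client.Discovery().ServerVersion(); err != nil {
			fmt.Printf("apiserver not ready yet: %v\n", err)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		panic(fmt.Errorf("apiserver did not recover: %w", err))
	}
	fmt.Println("apiserver reachable again")
}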
Jun 15 23:36:48.905 E ns/openshift-cluster-node-tuning-operator pod/tuned-82kxn node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:48.929 E ns/openshift-monitoring pod/node-exporter-4xkz9 node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:48.944 E ns/openshift-image-registry pod/node-ca-6chdf node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:48.975 E ns/openshift-sdn pod/ovs-4xv66 node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:48.992 E ns/openshift-multus pod/multus-rhb5l node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:49.004 E ns/openshift-dns pod/dns-default-wrp7b node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:49.020 E ns/openshift-machine-config-operator pod/machine-config-daemon-2zlq8 node/ip-10-0-175-155.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:36:57.425 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator openshift-apiserver is reporting a failure: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable
Jun 15 23:37:00.816 E ns/openshift-machine-config-operator pod/machine-config-daemon-2zlq8 node/ip-10-0-175-155.ec2.internal container=oauth-proxy container exited with code 1 (Error): 
Jun 15 23:37:11.038 E ns/openshift-cluster-node-tuning-operator pod/tuned-5z62w node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.071 E ns/openshift-monitoring pod/node-exporter-zcgzc node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.107 E ns/openshift-controller-manager pod/controller-manager-qsj4z node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.129 E ns/openshift-image-registry pod/node-ca-tfpkt node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.166 E ns/openshift-sdn pod/sdn-controller-6l5nq node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.215 E ns/openshift-multus pod/multus-admission-controller-7q7g5 node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.259 E ns/openshift-sdn pod/ovs-54hpm node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.282 E ns/openshift-multus pod/multus-hr5ds node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.295 E ns/openshift-dns pod/dns-default-7l94m node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.318 E ns/openshift-machine-config-operator pod/machine-config-daemon-lcn8d node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:11.332 E ns/openshift-machine-config-operator pod/machine-config-server-b9hhn node/ip-10-0-161-67.ec2.internal invariant violation: pod may not transition Running->Pending
Jun 15 23:37:32.526 E ns/openshift-machine-config-operator pod/machine-config-daemon-lcn8d node/ip-10-0-161-67.ec2.internal container=oauth-proxy container exited with code 1 (Error):
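When triaging a dump like the 170 events in this run, a per-namespace tally makes the noisiest components (monitoring, sdn, machine-config, the operators on the drained masters) stand out. The helper below is an illustrative sketch, not part of the CI tooling; it only relies on the "ns/<namespace>" token visible in each event line and reads the log on stdin.

// triage: illustrative helper that counts E-level events per namespace.
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	counts := map[string]int{}
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some embedded container logs are long lines
	for scanner.Scan() {
		for _, field := range strings.Fields(scanner.Text()) {
			if strings.HasPrefix(field, "ns/") {
				counts[strings.TrimPrefix(field, "ns/")]++
				break
			}
		}
	}
	type entry struct {
		ns string
		n  int
	}
	var sorted []entry
	for ns, n := range counts {
		sorted = append(sorted, entry{ns, n})
	}
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].n > sorted[j].n })
	for _, e := range sorted {
		fmt.Printf("%4d  %s\n", e.n, e.ns)
	}
}

Usage, assuming the event list is saved to a file: go run triage.go < events.log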